Dec  4 04:34:52 np0005545273 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  4 04:34:52 np0005545273 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  4 04:34:52 np0005545273 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  4 04:34:52 np0005545273 kernel: BIOS-provided physical RAM map:
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  4 04:34:52 np0005545273 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  4 04:34:52 np0005545273 kernel: NX (Execute Disable) protection: active
Dec  4 04:34:52 np0005545273 kernel: APIC: Static calls initialized
Dec  4 04:34:52 np0005545273 kernel: SMBIOS 2.8 present.
Dec  4 04:34:52 np0005545273 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  4 04:34:52 np0005545273 kernel: Hypervisor detected: KVM
Dec  4 04:34:52 np0005545273 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  4 04:34:52 np0005545273 kernel: kvm-clock: using sched offset of 3311035341 cycles
Dec  4 04:34:52 np0005545273 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  4 04:34:52 np0005545273 kernel: tsc: Detected 2799.998 MHz processor
Dec  4 04:34:52 np0005545273 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  4 04:34:52 np0005545273 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  4 04:34:52 np0005545273 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  4 04:34:52 np0005545273 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  4 04:34:52 np0005545273 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  4 04:34:52 np0005545273 kernel: Using GB pages for direct mapping
Dec  4 04:34:52 np0005545273 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  4 04:34:52 np0005545273 kernel: ACPI: Early table checksum verification disabled
Dec  4 04:34:52 np0005545273 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  4 04:34:52 np0005545273 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 04:34:52 np0005545273 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 04:34:52 np0005545273 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 04:34:52 np0005545273 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  4 04:34:52 np0005545273 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 04:34:52 np0005545273 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 04:34:52 np0005545273 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  4 04:34:52 np0005545273 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  4 04:34:52 np0005545273 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  4 04:34:52 np0005545273 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  4 04:34:52 np0005545273 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  4 04:34:52 np0005545273 kernel: No NUMA configuration found
Dec  4 04:34:52 np0005545273 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  4 04:34:52 np0005545273 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  4 04:34:52 np0005545273 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  4 04:34:52 np0005545273 kernel: Zone ranges:
Dec  4 04:34:52 np0005545273 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  4 04:34:52 np0005545273 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  4 04:34:52 np0005545273 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  4 04:34:52 np0005545273 kernel:  Device   empty
Dec  4 04:34:52 np0005545273 kernel: Movable zone start for each node
Dec  4 04:34:52 np0005545273 kernel: Early memory node ranges
Dec  4 04:34:52 np0005545273 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  4 04:34:52 np0005545273 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  4 04:34:52 np0005545273 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  4 04:34:52 np0005545273 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  4 04:34:52 np0005545273 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  4 04:34:52 np0005545273 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  4 04:34:52 np0005545273 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  4 04:34:52 np0005545273 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  4 04:34:52 np0005545273 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  4 04:34:52 np0005545273 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  4 04:34:52 np0005545273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  4 04:34:52 np0005545273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  4 04:34:52 np0005545273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  4 04:34:52 np0005545273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  4 04:34:52 np0005545273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  4 04:34:52 np0005545273 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  4 04:34:52 np0005545273 kernel: TSC deadline timer available
Dec  4 04:34:52 np0005545273 kernel: CPU topo: Max. logical packages:   8
Dec  4 04:34:52 np0005545273 kernel: CPU topo: Max. logical dies:       8
Dec  4 04:34:52 np0005545273 kernel: CPU topo: Max. dies per package:   1
Dec  4 04:34:52 np0005545273 kernel: CPU topo: Max. threads per core:   1
Dec  4 04:34:52 np0005545273 kernel: CPU topo: Num. cores per package:     1
Dec  4 04:34:52 np0005545273 kernel: CPU topo: Num. threads per package:   1
Dec  4 04:34:52 np0005545273 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  4 04:34:52 np0005545273 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  4 04:34:52 np0005545273 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  4 04:34:52 np0005545273 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  4 04:34:52 np0005545273 kernel: Booting paravirtualized kernel on KVM
Dec  4 04:34:52 np0005545273 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  4 04:34:52 np0005545273 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  4 04:34:52 np0005545273 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  4 04:34:52 np0005545273 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  4 04:34:52 np0005545273 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  4 04:34:52 np0005545273 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  4 04:34:52 np0005545273 kernel: random: crng init done
Dec  4 04:34:52 np0005545273 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: Fallback order for Node 0: 0 
Dec  4 04:34:52 np0005545273 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  4 04:34:52 np0005545273 kernel: Policy zone: Normal
Dec  4 04:34:52 np0005545273 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  4 04:34:52 np0005545273 kernel: software IO TLB: area num 8.
Dec  4 04:34:52 np0005545273 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  4 04:34:52 np0005545273 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  4 04:34:52 np0005545273 kernel: ftrace: allocated 193 pages with 3 groups
Dec  4 04:34:52 np0005545273 kernel: Dynamic Preempt: voluntary
Dec  4 04:34:52 np0005545273 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  4 04:34:52 np0005545273 kernel: rcu: 	RCU event tracing is enabled.
Dec  4 04:34:52 np0005545273 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  4 04:34:52 np0005545273 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  4 04:34:52 np0005545273 kernel: 	Rude variant of Tasks RCU enabled.
Dec  4 04:34:52 np0005545273 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  4 04:34:52 np0005545273 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  4 04:34:52 np0005545273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  4 04:34:52 np0005545273 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  4 04:34:52 np0005545273 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  4 04:34:52 np0005545273 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  4 04:34:52 np0005545273 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  4 04:34:52 np0005545273 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  4 04:34:52 np0005545273 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  4 04:34:52 np0005545273 kernel: Console: colour VGA+ 80x25
Dec  4 04:34:52 np0005545273 kernel: printk: console [ttyS0] enabled
Dec  4 04:34:52 np0005545273 kernel: ACPI: Core revision 20230331
Dec  4 04:34:52 np0005545273 kernel: APIC: Switch to symmetric I/O mode setup
Dec  4 04:34:52 np0005545273 kernel: x2apic enabled
Dec  4 04:34:52 np0005545273 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  4 04:34:52 np0005545273 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  4 04:34:52 np0005545273 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  4 04:34:52 np0005545273 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  4 04:34:52 np0005545273 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  4 04:34:52 np0005545273 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  4 04:34:52 np0005545273 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  4 04:34:52 np0005545273 kernel: Spectre V2 : Mitigation: Retpolines
Dec  4 04:34:52 np0005545273 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  4 04:34:52 np0005545273 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  4 04:34:52 np0005545273 kernel: RETBleed: Mitigation: untrained return thunk
Dec  4 04:34:52 np0005545273 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  4 04:34:52 np0005545273 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  4 04:34:52 np0005545273 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  4 04:34:52 np0005545273 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  4 04:34:52 np0005545273 kernel: x86/bugs: return thunk changed
Dec  4 04:34:52 np0005545273 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  4 04:34:52 np0005545273 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  4 04:34:52 np0005545273 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  4 04:34:52 np0005545273 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  4 04:34:52 np0005545273 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  4 04:34:52 np0005545273 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  4 04:34:52 np0005545273 kernel: Freeing SMP alternatives memory: 40K
Dec  4 04:34:52 np0005545273 kernel: pid_max: default: 32768 minimum: 301
Dec  4 04:34:52 np0005545273 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  4 04:34:52 np0005545273 kernel: landlock: Up and running.
Dec  4 04:34:52 np0005545273 kernel: Yama: becoming mindful.
Dec  4 04:34:52 np0005545273 kernel: SELinux:  Initializing.
Dec  4 04:34:52 np0005545273 kernel: LSM support for eBPF active
Dec  4 04:34:52 np0005545273 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  4 04:34:52 np0005545273 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  4 04:34:52 np0005545273 kernel: ... version:                0
Dec  4 04:34:52 np0005545273 kernel: ... bit width:              48
Dec  4 04:34:52 np0005545273 kernel: ... generic registers:      6
Dec  4 04:34:52 np0005545273 kernel: ... value mask:             0000ffffffffffff
Dec  4 04:34:52 np0005545273 kernel: ... max period:             00007fffffffffff
Dec  4 04:34:52 np0005545273 kernel: ... fixed-purpose events:   0
Dec  4 04:34:52 np0005545273 kernel: ... event mask:             000000000000003f
Dec  4 04:34:52 np0005545273 kernel: signal: max sigframe size: 1776
Dec  4 04:34:52 np0005545273 kernel: rcu: Hierarchical SRCU implementation.
Dec  4 04:34:52 np0005545273 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  4 04:34:52 np0005545273 kernel: smp: Bringing up secondary CPUs ...
Dec  4 04:34:52 np0005545273 kernel: smpboot: x86: Booting SMP configuration:
Dec  4 04:34:52 np0005545273 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  4 04:34:52 np0005545273 kernel: smp: Brought up 1 node, 8 CPUs
Dec  4 04:34:52 np0005545273 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  4 04:34:52 np0005545273 kernel: node 0 deferred pages initialised in 9ms
Dec  4 04:34:52 np0005545273 kernel: Memory: 7763872K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec  4 04:34:52 np0005545273 kernel: devtmpfs: initialized
Dec  4 04:34:52 np0005545273 kernel: x86/mm: Memory block size: 128MB
Dec  4 04:34:52 np0005545273 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  4 04:34:52 np0005545273 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  4 04:34:52 np0005545273 kernel: pinctrl core: initialized pinctrl subsystem
Dec  4 04:34:52 np0005545273 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  4 04:34:52 np0005545273 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  4 04:34:52 np0005545273 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  4 04:34:52 np0005545273 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  4 04:34:52 np0005545273 kernel: audit: initializing netlink subsys (disabled)
Dec  4 04:34:52 np0005545273 kernel: audit: type=2000 audit(1764840890.273:1): state=initialized audit_enabled=0 res=1
Dec  4 04:34:52 np0005545273 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  4 04:34:52 np0005545273 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  4 04:34:52 np0005545273 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  4 04:34:52 np0005545273 kernel: cpuidle: using governor menu
Dec  4 04:34:52 np0005545273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  4 04:34:52 np0005545273 kernel: PCI: Using configuration type 1 for base access
Dec  4 04:34:52 np0005545273 kernel: PCI: Using configuration type 1 for extended access
Dec  4 04:34:52 np0005545273 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  4 04:34:52 np0005545273 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  4 04:34:52 np0005545273 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  4 04:34:52 np0005545273 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  4 04:34:52 np0005545273 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  4 04:34:52 np0005545273 kernel: Demotion targets for Node 0: null
Dec  4 04:34:52 np0005545273 kernel: cryptd: max_cpu_qlen set to 1000
Dec  4 04:34:52 np0005545273 kernel: ACPI: Added _OSI(Module Device)
Dec  4 04:34:52 np0005545273 kernel: ACPI: Added _OSI(Processor Device)
Dec  4 04:34:52 np0005545273 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  4 04:34:52 np0005545273 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  4 04:34:52 np0005545273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  4 04:34:52 np0005545273 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  4 04:34:52 np0005545273 kernel: ACPI: Interpreter enabled
Dec  4 04:34:52 np0005545273 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  4 04:34:52 np0005545273 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  4 04:34:52 np0005545273 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  4 04:34:52 np0005545273 kernel: PCI: Using E820 reservations for host bridge windows
Dec  4 04:34:52 np0005545273 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  4 04:34:52 np0005545273 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  4 04:34:52 np0005545273 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [3] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [4] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [5] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [6] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [7] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [8] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [9] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [10] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [11] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [12] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [13] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [14] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [15] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [16] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [17] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [18] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [19] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [20] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [21] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [22] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [23] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [24] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [25] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [26] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [27] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [28] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [29] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [30] registered
Dec  4 04:34:52 np0005545273 kernel: acpiphp: Slot [31] registered
Dec  4 04:34:52 np0005545273 kernel: PCI host bridge to bus 0000:00
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  4 04:34:52 np0005545273 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  4 04:34:52 np0005545273 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  4 04:34:52 np0005545273 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  4 04:34:52 np0005545273 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  4 04:34:52 np0005545273 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  4 04:34:52 np0005545273 kernel: iommu: Default domain type: Translated
Dec  4 04:34:52 np0005545273 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  4 04:34:52 np0005545273 kernel: SCSI subsystem initialized
Dec  4 04:34:52 np0005545273 kernel: ACPI: bus type USB registered
Dec  4 04:34:52 np0005545273 kernel: usbcore: registered new interface driver usbfs
Dec  4 04:34:52 np0005545273 kernel: usbcore: registered new interface driver hub
Dec  4 04:34:52 np0005545273 kernel: usbcore: registered new device driver usb
Dec  4 04:34:52 np0005545273 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  4 04:34:52 np0005545273 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  4 04:34:52 np0005545273 kernel: PTP clock support registered
Dec  4 04:34:52 np0005545273 kernel: EDAC MC: Ver: 3.0.0
Dec  4 04:34:52 np0005545273 kernel: NetLabel: Initializing
Dec  4 04:34:52 np0005545273 kernel: NetLabel:  domain hash size = 128
Dec  4 04:34:52 np0005545273 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  4 04:34:52 np0005545273 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  4 04:34:52 np0005545273 kernel: PCI: Using ACPI for IRQ routing
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  4 04:34:52 np0005545273 kernel: vgaarb: loaded
Dec  4 04:34:52 np0005545273 kernel: clocksource: Switched to clocksource kvm-clock
Dec  4 04:34:52 np0005545273 kernel: VFS: Disk quotas dquot_6.6.0
Dec  4 04:34:52 np0005545273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  4 04:34:52 np0005545273 kernel: pnp: PnP ACPI init
Dec  4 04:34:52 np0005545273 kernel: pnp: PnP ACPI: found 5 devices
Dec  4 04:34:52 np0005545273 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  4 04:34:52 np0005545273 kernel: NET: Registered PF_INET protocol family
Dec  4 04:34:52 np0005545273 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  4 04:34:52 np0005545273 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  4 04:34:52 np0005545273 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  4 04:34:52 np0005545273 kernel: NET: Registered PF_XDP protocol family
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  4 04:34:52 np0005545273 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  4 04:34:52 np0005545273 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  4 04:34:52 np0005545273 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72413 usecs
Dec  4 04:34:52 np0005545273 kernel: PCI: CLS 0 bytes, default 64
Dec  4 04:34:52 np0005545273 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  4 04:34:52 np0005545273 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  4 04:34:52 np0005545273 kernel: ACPI: bus type thunderbolt registered
Dec  4 04:34:52 np0005545273 kernel: Trying to unpack rootfs image as initramfs...
Dec  4 04:34:52 np0005545273 kernel: Initialise system trusted keyrings
Dec  4 04:34:52 np0005545273 kernel: Key type blacklist registered
Dec  4 04:34:52 np0005545273 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  4 04:34:52 np0005545273 kernel: zbud: loaded
Dec  4 04:34:52 np0005545273 kernel: integrity: Platform Keyring initialized
Dec  4 04:34:52 np0005545273 kernel: integrity: Machine keyring initialized
Dec  4 04:34:52 np0005545273 kernel: Freeing initrd memory: 87804K
Dec  4 04:34:52 np0005545273 kernel: NET: Registered PF_ALG protocol family
Dec  4 04:34:52 np0005545273 kernel: xor: automatically using best checksumming function   avx
Dec  4 04:34:52 np0005545273 kernel: Key type asymmetric registered
Dec  4 04:34:52 np0005545273 kernel: Asymmetric key parser 'x509' registered
Dec  4 04:34:52 np0005545273 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  4 04:34:52 np0005545273 kernel: io scheduler mq-deadline registered
Dec  4 04:34:52 np0005545273 kernel: io scheduler kyber registered
Dec  4 04:34:52 np0005545273 kernel: io scheduler bfq registered
Dec  4 04:34:52 np0005545273 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  4 04:34:52 np0005545273 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  4 04:34:52 np0005545273 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  4 04:34:52 np0005545273 kernel: ACPI: button: Power Button [PWRF]
Dec  4 04:34:52 np0005545273 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  4 04:34:52 np0005545273 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  4 04:34:52 np0005545273 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  4 04:34:52 np0005545273 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  4 04:34:52 np0005545273 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  4 04:34:52 np0005545273 kernel: Non-volatile memory driver v1.3
Dec  4 04:34:52 np0005545273 kernel: rdac: device handler registered
Dec  4 04:34:52 np0005545273 kernel: hp_sw: device handler registered
Dec  4 04:34:52 np0005545273 kernel: emc: device handler registered
Dec  4 04:34:52 np0005545273 kernel: alua: device handler registered
Dec  4 04:34:52 np0005545273 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  4 04:34:52 np0005545273 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  4 04:34:52 np0005545273 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  4 04:34:52 np0005545273 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  4 04:34:52 np0005545273 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  4 04:34:52 np0005545273 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 04:34:52 np0005545273 kernel: usb usb1: Product: UHCI Host Controller
Dec  4 04:34:52 np0005545273 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  4 04:34:52 np0005545273 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  4 04:34:52 np0005545273 kernel: hub 1-0:1.0: USB hub found
Dec  4 04:34:52 np0005545273 kernel: hub 1-0:1.0: 2 ports detected
Dec  4 04:34:52 np0005545273 kernel: usbcore: registered new interface driver usbserial_generic
Dec  4 04:34:52 np0005545273 kernel: usbserial: USB Serial support registered for generic
Dec  4 04:34:52 np0005545273 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  4 04:34:52 np0005545273 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  4 04:34:52 np0005545273 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  4 04:34:52 np0005545273 kernel: mousedev: PS/2 mouse device common for all mice
Dec  4 04:34:52 np0005545273 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  4 04:34:52 np0005545273 kernel: rtc_cmos 00:04: registered as rtc0
Dec  4 04:34:52 np0005545273 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  4 04:34:52 np0005545273 kernel: rtc_cmos 00:04: setting system clock to 2025-12-04T09:34:51 UTC (1764840891)
Dec  4 04:34:52 np0005545273 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  4 04:34:52 np0005545273 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  4 04:34:52 np0005545273 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  4 04:34:52 np0005545273 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  4 04:34:52 np0005545273 kernel: usbcore: registered new interface driver usbhid
Dec  4 04:34:52 np0005545273 kernel: usbhid: USB HID core driver
Dec  4 04:34:52 np0005545273 kernel: drop_monitor: Initializing network drop monitor service
Dec  4 04:34:52 np0005545273 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  4 04:34:52 np0005545273 kernel: Initializing XFRM netlink socket
Dec  4 04:34:52 np0005545273 kernel: NET: Registered PF_INET6 protocol family
Dec  4 04:34:52 np0005545273 kernel: Segment Routing with IPv6
Dec  4 04:34:52 np0005545273 kernel: NET: Registered PF_PACKET protocol family
Dec  4 04:34:52 np0005545273 kernel: mpls_gso: MPLS GSO support
Dec  4 04:34:52 np0005545273 kernel: IPI shorthand broadcast: enabled
Dec  4 04:34:52 np0005545273 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  4 04:34:52 np0005545273 kernel: AES CTR mode by8 optimization enabled
Dec  4 04:34:52 np0005545273 kernel: sched_clock: Marking stable (1180001894, 151662798)->(1467886669, -136221977)
Dec  4 04:34:52 np0005545273 kernel: registered taskstats version 1
Dec  4 04:34:52 np0005545273 kernel: Loading compiled-in X.509 certificates
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  4 04:34:52 np0005545273 kernel: Demotion targets for Node 0: null
Dec  4 04:34:52 np0005545273 kernel: page_owner is disabled
Dec  4 04:34:52 np0005545273 kernel: Key type .fscrypt registered
Dec  4 04:34:52 np0005545273 kernel: Key type fscrypt-provisioning registered
Dec  4 04:34:52 np0005545273 kernel: Key type big_key registered
Dec  4 04:34:52 np0005545273 kernel: Key type encrypted registered
Dec  4 04:34:52 np0005545273 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  4 04:34:52 np0005545273 kernel: Loading compiled-in module X.509 certificates
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  4 04:34:52 np0005545273 kernel: ima: Allocated hash algorithm: sha256
Dec  4 04:34:52 np0005545273 kernel: ima: No architecture policies found
Dec  4 04:34:52 np0005545273 kernel: evm: Initialising EVM extended attributes:
Dec  4 04:34:52 np0005545273 kernel: evm: security.selinux
Dec  4 04:34:52 np0005545273 kernel: evm: security.SMACK64 (disabled)
Dec  4 04:34:52 np0005545273 kernel: evm: security.SMACK64EXEC (disabled)
Dec  4 04:34:52 np0005545273 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  4 04:34:52 np0005545273 kernel: evm: security.SMACK64MMAP (disabled)
Dec  4 04:34:52 np0005545273 kernel: evm: security.apparmor (disabled)
Dec  4 04:34:52 np0005545273 kernel: evm: security.ima
Dec  4 04:34:52 np0005545273 kernel: evm: security.capability
Dec  4 04:34:52 np0005545273 kernel: evm: HMAC attrs: 0x1
Dec  4 04:34:52 np0005545273 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  4 04:34:52 np0005545273 kernel: Running certificate verification RSA selftest
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  4 04:34:52 np0005545273 kernel: Running certificate verification ECDSA selftest
Dec  4 04:34:52 np0005545273 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  4 04:34:52 np0005545273 kernel: clk: Disabling unused clocks
Dec  4 04:34:52 np0005545273 kernel: Freeing unused decrypted memory: 2028K
Dec  4 04:34:52 np0005545273 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  4 04:34:52 np0005545273 kernel: Write protecting the kernel read-only data: 30720k
Dec  4 04:34:52 np0005545273 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  4 04:34:52 np0005545273 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  4 04:34:52 np0005545273 kernel: Run /init as init process
Dec  4 04:34:52 np0005545273 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  4 04:34:52 np0005545273 systemd: Detected virtualization kvm.
Dec  4 04:34:52 np0005545273 systemd: Detected architecture x86-64.
Dec  4 04:34:52 np0005545273 systemd: Running in initrd.
Dec  4 04:34:52 np0005545273 systemd: No hostname configured, using default hostname.
Dec  4 04:34:52 np0005545273 systemd: Hostname set to <localhost>.
Dec  4 04:34:52 np0005545273 systemd: Initializing machine ID from VM UUID.
Dec  4 04:34:52 np0005545273 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  4 04:34:52 np0005545273 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  4 04:34:52 np0005545273 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  4 04:34:52 np0005545273 kernel: usb 1-1: Manufacturer: QEMU
Dec  4 04:34:52 np0005545273 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  4 04:34:52 np0005545273 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  4 04:34:52 np0005545273 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  4 04:34:52 np0005545273 systemd: Queued start job for default target Initrd Default Target.
Dec  4 04:34:52 np0005545273 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  4 04:34:52 np0005545273 systemd: Reached target Local Encrypted Volumes.
Dec  4 04:34:52 np0005545273 systemd: Reached target Initrd /usr File System.
Dec  4 04:34:52 np0005545273 systemd: Reached target Local File Systems.
Dec  4 04:34:52 np0005545273 systemd: Reached target Path Units.
Dec  4 04:34:52 np0005545273 systemd: Reached target Slice Units.
Dec  4 04:34:52 np0005545273 systemd: Reached target Swaps.
Dec  4 04:34:52 np0005545273 systemd: Reached target Timer Units.
Dec  4 04:34:52 np0005545273 systemd: Listening on D-Bus System Message Bus Socket.
Dec  4 04:34:52 np0005545273 systemd: Listening on Journal Socket (/dev/log).
Dec  4 04:34:52 np0005545273 systemd: Listening on Journal Socket.
Dec  4 04:34:52 np0005545273 systemd: Listening on udev Control Socket.
Dec  4 04:34:52 np0005545273 systemd: Listening on udev Kernel Socket.
Dec  4 04:34:52 np0005545273 systemd: Reached target Socket Units.
Dec  4 04:34:52 np0005545273 systemd: Starting Create List of Static Device Nodes...
Dec  4 04:34:52 np0005545273 systemd: Starting Journal Service...
Dec  4 04:34:52 np0005545273 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  4 04:34:52 np0005545273 systemd: Starting Apply Kernel Variables...
Dec  4 04:34:52 np0005545273 systemd: Starting Create System Users...
Dec  4 04:34:52 np0005545273 systemd: Starting Setup Virtual Console...
Dec  4 04:34:52 np0005545273 systemd: Finished Create List of Static Device Nodes.
Dec  4 04:34:52 np0005545273 systemd: Finished Apply Kernel Variables.
Dec  4 04:34:52 np0005545273 systemd-journald[310]: Journal started
Dec  4 04:34:52 np0005545273 systemd-journald[310]: Runtime Journal (/run/log/journal/1f0bfa2dc9224848973a776654e5dc59) is 8.0M, max 153.6M, 145.6M free.
Dec  4 04:34:52 np0005545273 systemd-sysusers[314]: Creating group 'users' with GID 100.
Dec  4 04:34:52 np0005545273 systemd: Started Journal Service.
Dec  4 04:34:52 np0005545273 systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Dec  4 04:34:52 np0005545273 systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  4 04:34:52 np0005545273 systemd[1]: Finished Create System Users.
Dec  4 04:34:52 np0005545273 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  4 04:34:52 np0005545273 systemd[1]: Starting Create Volatile Files and Directories...
Dec  4 04:34:52 np0005545273 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  4 04:34:52 np0005545273 systemd[1]: Finished Setup Virtual Console.
Dec  4 04:34:52 np0005545273 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  4 04:34:52 np0005545273 systemd[1]: Starting dracut cmdline hook...
Dec  4 04:34:52 np0005545273 systemd[1]: Finished Create Volatile Files and Directories.
Dec  4 04:34:52 np0005545273 dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Dec  4 04:34:52 np0005545273 dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  4 04:34:52 np0005545273 systemd[1]: Finished dracut cmdline hook.
Dec  4 04:34:52 np0005545273 systemd[1]: Starting dracut pre-udev hook...
Dec  4 04:34:52 np0005545273 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  4 04:34:52 np0005545273 kernel: device-mapper: uevent: version 1.0.3
Dec  4 04:34:52 np0005545273 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  4 04:34:52 np0005545273 kernel: RPC: Registered named UNIX socket transport module.
Dec  4 04:34:52 np0005545273 kernel: RPC: Registered udp transport module.
Dec  4 04:34:52 np0005545273 kernel: RPC: Registered tcp transport module.
Dec  4 04:34:52 np0005545273 kernel: RPC: Registered tcp-with-tls transport module.
Dec  4 04:34:52 np0005545273 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  4 04:34:52 np0005545273 rpc.statd[446]: Version 2.5.4 starting
Dec  4 04:34:52 np0005545273 rpc.statd[446]: Initializing NSM state
Dec  4 04:34:52 np0005545273 rpc.idmapd[451]: Setting log level to 0
Dec  4 04:34:52 np0005545273 systemd[1]: Finished dracut pre-udev hook.
Dec  4 04:34:53 np0005545273 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  4 04:34:53 np0005545273 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Dec  4 04:34:53 np0005545273 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  4 04:34:53 np0005545273 systemd[1]: Starting dracut pre-trigger hook...
Dec  4 04:34:53 np0005545273 systemd[1]: Finished dracut pre-trigger hook.
Dec  4 04:34:53 np0005545273 systemd[1]: Starting Coldplug All udev Devices...
Dec  4 04:34:53 np0005545273 systemd[1]: Created slice Slice /system/modprobe.
Dec  4 04:34:53 np0005545273 systemd[1]: Starting Load Kernel Module configfs...
Dec  4 04:34:53 np0005545273 systemd[1]: Finished Coldplug All udev Devices.
Dec  4 04:34:53 np0005545273 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  4 04:34:53 np0005545273 systemd[1]: Finished Load Kernel Module configfs.
Dec  4 04:34:53 np0005545273 systemd[1]: Mounting Kernel Configuration File System...
Dec  4 04:34:53 np0005545273 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  4 04:34:53 np0005545273 systemd[1]: Reached target Network.
Dec  4 04:34:53 np0005545273 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  4 04:34:53 np0005545273 systemd[1]: Starting dracut initqueue hook...
Dec  4 04:34:53 np0005545273 systemd[1]: Mounted Kernel Configuration File System.
Dec  4 04:34:53 np0005545273 systemd[1]: Reached target System Initialization.
Dec  4 04:34:53 np0005545273 systemd[1]: Reached target Basic System.
Dec  4 04:34:53 np0005545273 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  4 04:34:53 np0005545273 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  4 04:34:53 np0005545273 systemd-udevd[502]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 04:34:53 np0005545273 kernel: vda: vda1
Dec  4 04:34:53 np0005545273 kernel: scsi host0: ata_piix
Dec  4 04:34:53 np0005545273 kernel: scsi host1: ata_piix
Dec  4 04:34:53 np0005545273 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  4 04:34:53 np0005545273 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  4 04:34:53 np0005545273 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  4 04:34:53 np0005545273 systemd[1]: Reached target Initrd Root Device.
Dec  4 04:34:53 np0005545273 kernel: ata1: found unknown device (class 0)
Dec  4 04:34:53 np0005545273 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  4 04:34:53 np0005545273 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  4 04:34:53 np0005545273 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  4 04:34:53 np0005545273 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  4 04:34:53 np0005545273 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  4 04:34:53 np0005545273 systemd[1]: Finished dracut initqueue hook.
Dec  4 04:34:53 np0005545273 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  4 04:34:53 np0005545273 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  4 04:34:53 np0005545273 systemd[1]: Reached target Remote File Systems.
Dec  4 04:34:53 np0005545273 systemd[1]: Starting dracut pre-mount hook...
Dec  4 04:34:53 np0005545273 systemd[1]: Finished dracut pre-mount hook.
Dec  4 04:34:53 np0005545273 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  4 04:34:53 np0005545273 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Dec  4 04:34:53 np0005545273 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  4 04:34:53 np0005545273 systemd[1]: Mounting /sysroot...
Dec  4 04:34:54 np0005545273 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  4 04:34:54 np0005545273 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  4 04:34:54 np0005545273 kernel: XFS (vda1): Ending clean mount
Dec  4 04:34:54 np0005545273 systemd[1]: Mounted /sysroot.
Dec  4 04:34:54 np0005545273 systemd[1]: Reached target Initrd Root File System.
Dec  4 04:34:54 np0005545273 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  4 04:34:54 np0005545273 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  4 04:34:54 np0005545273 systemd[1]: Reached target Initrd File Systems.
Dec  4 04:34:54 np0005545273 systemd[1]: Reached target Initrd Default Target.
Dec  4 04:34:54 np0005545273 systemd[1]: Starting dracut mount hook...
Dec  4 04:34:54 np0005545273 systemd[1]: Finished dracut mount hook.
Dec  4 04:34:54 np0005545273 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  4 04:34:54 np0005545273 rpc.idmapd[451]: exiting on signal 15
Dec  4 04:34:54 np0005545273 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  4 04:34:54 np0005545273 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Network.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Timer Units.
Dec  4 04:34:54 np0005545273 systemd[1]: dbus.socket: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  4 04:34:54 np0005545273 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Initrd Default Target.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Basic System.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Initrd Root Device.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Initrd /usr File System.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Path Units.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Remote File Systems.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Slice Units.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Socket Units.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target System Initialization.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Local File Systems.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Swaps.
Dec  4 04:34:54 np0005545273 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped dracut mount hook.
Dec  4 04:34:54 np0005545273 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped dracut pre-mount hook.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  4 04:34:54 np0005545273 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped dracut initqueue hook.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Apply Kernel Variables.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Coldplug All udev Devices.
Dec  4 04:34:54 np0005545273 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped dracut pre-trigger hook.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Setup Virtual Console.
Dec  4 04:34:54 np0005545273 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Closed udev Control Socket.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Closed udev Kernel Socket.
Dec  4 04:34:54 np0005545273 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped dracut pre-udev hook.
Dec  4 04:34:54 np0005545273 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped dracut cmdline hook.
Dec  4 04:34:54 np0005545273 systemd[1]: Starting Cleanup udev Database...
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  4 04:34:54 np0005545273 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  4 04:34:54 np0005545273 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Stopped Create System Users.
Dec  4 04:34:54 np0005545273 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  4 04:34:54 np0005545273 systemd[1]: Finished Cleanup udev Database.
Dec  4 04:34:54 np0005545273 systemd[1]: Reached target Switch Root.
Dec  4 04:34:54 np0005545273 systemd[1]: Starting Switch Root...
Dec  4 04:34:54 np0005545273 systemd[1]: Switching root.
Dec  4 04:34:54 np0005545273 systemd-journald[310]: Received SIGTERM from PID 1 (systemd).
Dec  4 04:34:54 np0005545273 systemd-journald[310]: Journal stopped
Dec  4 04:34:55 np0005545273 kernel: audit: type=1404 audit(1764840894.708:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  4 04:34:55 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 04:34:55 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 04:34:55 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 04:34:55 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 04:34:55 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 04:34:55 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 04:34:55 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 04:34:55 np0005545273 kernel: audit: type=1403 audit(1764840894.838:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  4 04:34:55 np0005545273 systemd: Successfully loaded SELinux policy in 132.259ms.
Dec  4 04:34:55 np0005545273 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.146ms.
Dec  4 04:34:55 np0005545273 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  4 04:34:55 np0005545273 systemd: Detected virtualization kvm.
Dec  4 04:34:55 np0005545273 systemd: Detected architecture x86-64.
Dec  4 04:34:55 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 04:34:55 np0005545273 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  4 04:34:55 np0005545273 systemd: Stopped Switch Root.
Dec  4 04:34:55 np0005545273 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  4 04:34:55 np0005545273 systemd: Created slice Slice /system/getty.
Dec  4 04:34:55 np0005545273 systemd: Created slice Slice /system/serial-getty.
Dec  4 04:34:55 np0005545273 systemd: Created slice Slice /system/sshd-keygen.
Dec  4 04:34:55 np0005545273 systemd: Created slice User and Session Slice.
Dec  4 04:34:55 np0005545273 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  4 04:34:55 np0005545273 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  4 04:34:55 np0005545273 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  4 04:34:55 np0005545273 systemd: Reached target Local Encrypted Volumes.
Dec  4 04:34:55 np0005545273 systemd: Stopped target Switch Root.
Dec  4 04:34:55 np0005545273 systemd: Stopped target Initrd File Systems.
Dec  4 04:34:55 np0005545273 systemd: Stopped target Initrd Root File System.
Dec  4 04:34:55 np0005545273 systemd: Reached target Local Integrity Protected Volumes.
Dec  4 04:34:55 np0005545273 systemd: Reached target Path Units.
Dec  4 04:34:55 np0005545273 systemd: Reached target rpc_pipefs.target.
Dec  4 04:34:55 np0005545273 systemd: Reached target Slice Units.
Dec  4 04:34:55 np0005545273 systemd: Reached target Swaps.
Dec  4 04:34:55 np0005545273 systemd: Reached target Local Verity Protected Volumes.
Dec  4 04:34:55 np0005545273 systemd: Listening on RPCbind Server Activation Socket.
Dec  4 04:34:55 np0005545273 systemd: Reached target RPC Port Mapper.
Dec  4 04:34:55 np0005545273 systemd: Listening on Process Core Dump Socket.
Dec  4 04:34:55 np0005545273 systemd: Listening on initctl Compatibility Named Pipe.
Dec  4 04:34:55 np0005545273 systemd: Listening on udev Control Socket.
Dec  4 04:34:55 np0005545273 systemd: Listening on udev Kernel Socket.
Dec  4 04:34:55 np0005545273 systemd: Mounting Huge Pages File System...
Dec  4 04:34:55 np0005545273 systemd: Mounting POSIX Message Queue File System...
Dec  4 04:34:55 np0005545273 systemd: Mounting Kernel Debug File System...
Dec  4 04:34:55 np0005545273 systemd: Mounting Kernel Trace File System...
Dec  4 04:34:55 np0005545273 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  4 04:34:55 np0005545273 systemd: Starting Create List of Static Device Nodes...
Dec  4 04:34:55 np0005545273 systemd: Starting Load Kernel Module configfs...
Dec  4 04:34:55 np0005545273 systemd: Starting Load Kernel Module drm...
Dec  4 04:34:55 np0005545273 systemd: Starting Load Kernel Module efi_pstore...
Dec  4 04:34:55 np0005545273 systemd: Starting Load Kernel Module fuse...
Dec  4 04:34:55 np0005545273 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  4 04:34:55 np0005545273 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  4 04:34:55 np0005545273 systemd: Stopped File System Check on Root Device.
Dec  4 04:34:55 np0005545273 systemd: Stopped Journal Service.
Dec  4 04:34:55 np0005545273 systemd: Starting Journal Service...
Dec  4 04:34:55 np0005545273 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  4 04:34:55 np0005545273 systemd: Starting Generate network units from Kernel command line...
Dec  4 04:34:55 np0005545273 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  4 04:34:55 np0005545273 systemd: Starting Remount Root and Kernel File Systems...
Dec  4 04:34:55 np0005545273 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  4 04:34:55 np0005545273 systemd: Starting Apply Kernel Variables...
Dec  4 04:34:55 np0005545273 kernel: fuse: init (API version 7.37)
Dec  4 04:34:55 np0005545273 systemd: Starting Coldplug All udev Devices...
Dec  4 04:34:55 np0005545273 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  4 04:34:55 np0005545273 systemd: Mounted Huge Pages File System.
Dec  4 04:34:55 np0005545273 systemd: Mounted POSIX Message Queue File System.
Dec  4 04:34:55 np0005545273 systemd: Mounted Kernel Debug File System.
Dec  4 04:34:55 np0005545273 systemd: Mounted Kernel Trace File System.
Dec  4 04:34:55 np0005545273 systemd-journald[680]: Journal started
Dec  4 04:34:55 np0005545273 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  4 04:34:55 np0005545273 systemd[1]: Queued start job for default target Multi-User System.
Dec  4 04:34:55 np0005545273 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  4 04:34:55 np0005545273 systemd: Started Journal Service.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Create List of Static Device Nodes.
Dec  4 04:34:55 np0005545273 kernel: ACPI: bus type drm_connector registered
Dec  4 04:34:55 np0005545273 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Load Kernel Module configfs.
Dec  4 04:34:55 np0005545273 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Load Kernel Module drm.
Dec  4 04:34:55 np0005545273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  4 04:34:55 np0005545273 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Load Kernel Module fuse.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Generate network units from Kernel command line.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Apply Kernel Variables.
Dec  4 04:34:55 np0005545273 systemd[1]: Mounting FUSE Control File System...
Dec  4 04:34:55 np0005545273 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Rebuild Hardware Database...
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  4 04:34:55 np0005545273 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Load/Save OS Random Seed...
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Create System Users...
Dec  4 04:34:55 np0005545273 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  4 04:34:55 np0005545273 systemd[1]: Mounted FUSE Control File System.
Dec  4 04:34:55 np0005545273 systemd-journald[680]: Received client request to flush runtime journal.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Load/Save OS Random Seed.
Dec  4 04:34:55 np0005545273 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Coldplug All udev Devices.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Create System Users.
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  4 04:34:55 np0005545273 systemd[1]: Reached target Preparation for Local File Systems.
Dec  4 04:34:55 np0005545273 systemd[1]: Reached target Local File Systems.
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  4 04:34:55 np0005545273 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  4 04:34:55 np0005545273 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  4 04:34:55 np0005545273 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Automatic Boot Loader Update...
Dec  4 04:34:55 np0005545273 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Create Volatile Files and Directories...
Dec  4 04:34:55 np0005545273 bootctl[699]: Couldn't find EFI system partition, skipping.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Automatic Boot Loader Update.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Create Volatile Files and Directories.
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Security Auditing Service...
Dec  4 04:34:55 np0005545273 systemd[1]: Starting RPC Bind...
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Rebuild Journal Catalog...
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  4 04:34:55 np0005545273 auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  4 04:34:55 np0005545273 auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  4 04:34:55 np0005545273 systemd[1]: Started RPC Bind.
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Rebuild Journal Catalog.
Dec  4 04:34:55 np0005545273 augenrules[710]: /sbin/augenrules: No change
Dec  4 04:34:55 np0005545273 augenrules[725]: No rules
Dec  4 04:34:55 np0005545273 augenrules[725]: enabled 1
Dec  4 04:34:55 np0005545273 augenrules[725]: failure 1
Dec  4 04:34:55 np0005545273 augenrules[725]: pid 705
Dec  4 04:34:55 np0005545273 augenrules[725]: rate_limit 0
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_limit 8192
Dec  4 04:34:55 np0005545273 augenrules[725]: lost 0
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog 0
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_wait_time 60000
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_wait_time_actual 0
Dec  4 04:34:55 np0005545273 augenrules[725]: enabled 1
Dec  4 04:34:55 np0005545273 augenrules[725]: failure 1
Dec  4 04:34:55 np0005545273 augenrules[725]: pid 705
Dec  4 04:34:55 np0005545273 augenrules[725]: rate_limit 0
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_limit 8192
Dec  4 04:34:55 np0005545273 augenrules[725]: lost 0
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog 2
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_wait_time 60000
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_wait_time_actual 0
Dec  4 04:34:55 np0005545273 augenrules[725]: enabled 1
Dec  4 04:34:55 np0005545273 augenrules[725]: failure 1
Dec  4 04:34:55 np0005545273 augenrules[725]: pid 705
Dec  4 04:34:55 np0005545273 augenrules[725]: rate_limit 0
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_limit 8192
Dec  4 04:34:55 np0005545273 augenrules[725]: lost 0
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog 2
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_wait_time 60000
Dec  4 04:34:55 np0005545273 augenrules[725]: backlog_wait_time_actual 0
Dec  4 04:34:55 np0005545273 systemd[1]: Started Security Auditing Service.
Dec  4 04:34:55 np0005545273 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  4 04:34:55 np0005545273 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  4 04:34:56 np0005545273 systemd[1]: Finished Rebuild Hardware Database.
Dec  4 04:34:56 np0005545273 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  4 04:34:56 np0005545273 systemd[1]: Starting Update is Completed...
Dec  4 04:34:56 np0005545273 systemd[1]: Finished Update is Completed.
Dec  4 04:34:56 np0005545273 systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Dec  4 04:34:56 np0005545273 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  4 04:34:56 np0005545273 systemd[1]: Reached target System Initialization.
Dec  4 04:34:56 np0005545273 systemd[1]: Started dnf makecache --timer.
Dec  4 04:34:56 np0005545273 systemd[1]: Started Daily rotation of log files.
Dec  4 04:34:56 np0005545273 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  4 04:34:56 np0005545273 systemd[1]: Reached target Timer Units.
Dec  4 04:34:56 np0005545273 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  4 04:34:56 np0005545273 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  4 04:34:56 np0005545273 systemd[1]: Reached target Socket Units.
Dec  4 04:34:56 np0005545273 systemd[1]: Starting D-Bus System Message Bus...
Dec  4 04:34:56 np0005545273 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  4 04:34:56 np0005545273 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  4 04:34:56 np0005545273 systemd[1]: Starting Load Kernel Module configfs...
Dec  4 04:34:56 np0005545273 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  4 04:34:56 np0005545273 systemd[1]: Finished Load Kernel Module configfs.
Dec  4 04:34:56 np0005545273 systemd-udevd[739]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 04:34:56 np0005545273 systemd[1]: Started D-Bus System Message Bus.
Dec  4 04:34:56 np0005545273 systemd[1]: Reached target Basic System.
Dec  4 04:34:56 np0005545273 dbus-broker-lau[758]: Ready
Dec  4 04:34:56 np0005545273 systemd[1]: Starting NTP client/server...
Dec  4 04:34:56 np0005545273 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  4 04:34:56 np0005545273 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  4 04:34:56 np0005545273 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  4 04:34:56 np0005545273 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  4 04:34:56 np0005545273 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  4 04:34:56 np0005545273 systemd[1]: Starting IPv4 firewall with iptables...
Dec  4 04:34:56 np0005545273 chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  4 04:34:56 np0005545273 chronyd[791]: Loaded 0 symmetric keys
Dec  4 04:34:56 np0005545273 chronyd[791]: Using right/UTC timezone to obtain leap second data
Dec  4 04:34:56 np0005545273 chronyd[791]: Loaded seccomp filter (level 2)
Dec  4 04:34:56 np0005545273 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  4 04:34:56 np0005545273 systemd[1]: Started irqbalance daemon.
Dec  4 04:34:56 np0005545273 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  4 04:34:56 np0005545273 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 04:34:56 np0005545273 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 04:34:56 np0005545273 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 04:34:56 np0005545273 systemd[1]: Reached target sshd-keygen.target.
Dec  4 04:34:56 np0005545273 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  4 04:34:56 np0005545273 systemd[1]: Reached target User and Group Name Lookups.
Dec  4 04:34:56 np0005545273 systemd[1]: Starting User Login Management...
Dec  4 04:34:56 np0005545273 systemd[1]: Started NTP client/server.
Dec  4 04:34:56 np0005545273 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  4 04:34:56 np0005545273 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  4 04:34:56 np0005545273 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  4 04:34:56 np0005545273 kernel: Console: switching to colour dummy device 80x25
Dec  4 04:34:56 np0005545273 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  4 04:34:56 np0005545273 kernel: [drm] features: -context_init
Dec  4 04:34:56 np0005545273 kernel: [drm] number of scanouts: 1
Dec  4 04:34:56 np0005545273 kernel: [drm] number of cap sets: 0
Dec  4 04:34:56 np0005545273 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  4 04:34:56 np0005545273 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  4 04:34:56 np0005545273 kernel: Console: switching to colour frame buffer device 128x48
Dec  4 04:34:56 np0005545273 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  4 04:34:56 np0005545273 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  4 04:34:56 np0005545273 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  4 04:34:56 np0005545273 kernel: kvm_amd: TSC scaling supported
Dec  4 04:34:56 np0005545273 kernel: kvm_amd: Nested Virtualization enabled
Dec  4 04:34:56 np0005545273 kernel: kvm_amd: Nested Paging enabled
Dec  4 04:34:56 np0005545273 kernel: kvm_amd: LBR virtualization supported
Dec  4 04:34:56 np0005545273 systemd-logind[798]: New seat seat0.
Dec  4 04:34:56 np0005545273 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  4 04:34:56 np0005545273 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  4 04:34:56 np0005545273 systemd[1]: Started User Login Management.
Dec  4 04:34:56 np0005545273 iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Dec  4 04:34:56 np0005545273 systemd[1]: Finished IPv4 firewall with iptables.
Dec  4 04:34:56 np0005545273 cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 04 Dec 2025 09:34:56 +0000. Up 6.32 seconds.
Dec  4 04:34:56 np0005545273 systemd[1]: run-cloud\x2dinit-tmp-tmpfjn80l8m.mount: Deactivated successfully.
Dec  4 04:34:56 np0005545273 systemd[1]: Starting Hostname Service...
Dec  4 04:34:57 np0005545273 systemd[1]: Started Hostname Service.
Dec  4 04:34:57 np0005545273 systemd-hostnamed[856]: Hostname set to <np0005545273.novalocal> (static)
Dec  4 04:34:57 np0005545273 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  4 04:34:57 np0005545273 systemd[1]: Reached target Preparation for Network.
Dec  4 04:34:57 np0005545273 systemd[1]: Starting Network Manager...
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2110] NetworkManager (version 1.54.1-1.el9) is starting... (boot:df4fb9d0-81a4-4e5e-8b88-c0920d7ba5e9)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2115] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2197] manager[0x55a174985080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2235] hostname: hostname: using hostnamed
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2235] hostname: static hostname changed from (none) to "np0005545273.novalocal"
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2239] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2370] manager[0x55a174985080]: rfkill: Wi-Fi hardware radio set enabled
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2371] manager[0x55a174985080]: rfkill: WWAN hardware radio set enabled
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2423] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2424] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2425] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2426] manager: Networking is enabled by state file
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2428] settings: Loaded settings plugin: keyfile (internal)
Dec  4 04:34:57 np0005545273 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2439] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2467] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2483] dhcp: init: Using DHCP client 'internal'
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2486] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2507] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2517] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2527] device (lo): Activation: starting connection 'lo' (3cd632aa-e4f7-4e63-bb4d-c1d9ec185b32)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2538] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2542] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2577] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2583] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2585] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2587] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2590] device (eth0): carrier: link connected
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2594] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2600] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2605] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2609] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2611] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2613] manager: NetworkManager state is now CONNECTING
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2614] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2621] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2624] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2677] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2684] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2702] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 04:34:57 np0005545273 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 04:34:57 np0005545273 systemd[1]: Started Network Manager.
Dec  4 04:34:57 np0005545273 systemd[1]: Reached target Network.
Dec  4 04:34:57 np0005545273 systemd[1]: Starting Network Manager Wait Online...
Dec  4 04:34:57 np0005545273 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  4 04:34:57 np0005545273 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2973] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2981] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.2983] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.3001] device (lo): Activation: successful, device activated.
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.3017] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.3023] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.3030] device (eth0): Activation: successful, device activated.
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.3037] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  4 04:34:57 np0005545273 NetworkManager[860]: <info>  [1764840897.3040] manager: startup complete
Dec  4 04:34:57 np0005545273 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  4 04:34:57 np0005545273 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  4 04:34:57 np0005545273 systemd[1]: Reached target NFS client services.
Dec  4 04:34:57 np0005545273 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  4 04:34:57 np0005545273 systemd[1]: Reached target Remote File Systems.
Dec  4 04:34:57 np0005545273 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  4 04:34:57 np0005545273 systemd[1]: Finished Network Manager Wait Online.
Dec  4 04:34:57 np0005545273 systemd[1]: Starting Cloud-init: Network Stage...
Dec  4 04:34:57 np0005545273 cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 04 Dec 2025 09:34:57 +0000. Up 7.24 seconds.
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.169         | 255.255.255.0 | global | fa:16:3e:e2:26:53 |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fee2:2653/64 |       .       |  link  | fa:16:3e:e2:26:53 |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  4 04:34:57 np0005545273 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  4 04:35:00 np0005545273 cloud-init[924]: Generating public/private rsa key pair.
Dec  4 04:35:00 np0005545273 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  4 04:35:00 np0005545273 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  4 04:35:00 np0005545273 cloud-init[924]: The key fingerprint is:
Dec  4 04:35:00 np0005545273 cloud-init[924]: SHA256:SOKl5YOFJg3y4xP1gD+6MIF1nzlXlosXII9rK9Wj6X4 root@np0005545273.novalocal
Dec  4 04:35:00 np0005545273 cloud-init[924]: The key's randomart image is:
Dec  4 04:35:00 np0005545273 cloud-init[924]: +---[RSA 3072]----+
Dec  4 04:35:00 np0005545273 cloud-init[924]: |. ..o . .. .     |
Dec  4 04:35:00 np0005545273 cloud-init[924]: | oo+.+ +  =      |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |..=o=.Bo.+ o     |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |o. *oX=+o o      |
Dec  4 04:35:00 np0005545273 cloud-init[924]: | .o.+.BoS.       |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |o .. o = .       |
Dec  4 04:35:00 np0005545273 cloud-init[924]: | o .. +          |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |  .  o  E        |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |     .o.         |
Dec  4 04:35:00 np0005545273 cloud-init[924]: +----[SHA256]-----+
Dec  4 04:35:00 np0005545273 cloud-init[924]: Generating public/private ecdsa key pair.
Dec  4 04:35:00 np0005545273 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  4 04:35:00 np0005545273 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  4 04:35:00 np0005545273 cloud-init[924]: The key fingerprint is:
Dec  4 04:35:00 np0005545273 cloud-init[924]: SHA256:8PK4bSMwPSMuUx/IPwPb3Q2p/4RD04yiV98AOF5c+48 root@np0005545273.novalocal
Dec  4 04:35:00 np0005545273 cloud-init[924]: The key's randomart image is:
Dec  4 04:35:00 np0005545273 cloud-init[924]: +---[ECDSA 256]---+
Dec  4 04:35:00 np0005545273 cloud-init[924]: |            .    |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |         o . .   |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |      . o + .    |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |       + o = .   |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |   . o. S =.+ .  |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |    O == +o+ o o |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |   o Xo=ooooo E .|
Dec  4 04:35:00 np0005545273 cloud-init[924]: |  o o B+= .o.    |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |   o  .=.o...    |
Dec  4 04:35:00 np0005545273 cloud-init[924]: +----[SHA256]-----+
Dec  4 04:35:00 np0005545273 cloud-init[924]: Generating public/private ed25519 key pair.
Dec  4 04:35:00 np0005545273 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  4 04:35:00 np0005545273 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  4 04:35:00 np0005545273 cloud-init[924]: The key fingerprint is:
Dec  4 04:35:00 np0005545273 cloud-init[924]: SHA256:Kw1KxzD3CXgyiFiocLbwAs57iba4815XYGhFDz1z1k8 root@np0005545273.novalocal
Dec  4 04:35:00 np0005545273 cloud-init[924]: The key's randomart image is:
Dec  4 04:35:00 np0005545273 cloud-init[924]: +--[ED25519 256]--+
Dec  4 04:35:00 np0005545273 cloud-init[924]: | ..  .+.   .     |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |*oo. + o+ o . E  |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |O=..O * .=   o   |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |ooo. X + .    .  |
Dec  4 04:35:00 np0005545273 cloud-init[924]: | .o o + S        |
Dec  4 04:35:00 np0005545273 cloud-init[924]: | + + o + .       |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |o o o o o        |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |o. . . .         |
Dec  4 04:35:00 np0005545273 cloud-init[924]: |.=o              |
Dec  4 04:35:00 np0005545273 cloud-init[924]: +----[SHA256]-----+
Dec  4 04:35:00 np0005545273 systemd[1]: Finished Cloud-init: Network Stage.
Dec  4 04:35:00 np0005545273 systemd[1]: Reached target Cloud-config availability.
Dec  4 04:35:00 np0005545273 systemd[1]: Reached target Network is Online.
Dec  4 04:35:00 np0005545273 systemd[1]: Starting Cloud-init: Config Stage...
Dec  4 04:35:00 np0005545273 systemd[1]: Starting Crash recovery kernel arming...
Dec  4 04:35:00 np0005545273 systemd[1]: Starting Notify NFS peers of a restart...
Dec  4 04:35:00 np0005545273 systemd[1]: Starting System Logging Service...
Dec  4 04:35:00 np0005545273 systemd[1]: Starting OpenSSH server daemon...
Dec  4 04:35:00 np0005545273 sm-notify[1006]: Version 2.5.4 starting
Dec  4 04:35:00 np0005545273 systemd[1]: Starting Permit User Sessions...
Dec  4 04:35:00 np0005545273 systemd[1]: Started Notify NFS peers of a restart.
Dec  4 04:35:00 np0005545273 systemd[1]: Finished Permit User Sessions.
Dec  4 04:35:00 np0005545273 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Dec  4 04:35:00 np0005545273 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  4 04:35:00 np0005545273 systemd[1]: Started Command Scheduler.
Dec  4 04:35:00 np0005545273 systemd[1]: Started Getty on tty1.
Dec  4 04:35:00 np0005545273 systemd[1]: Started Serial Getty on ttyS0.
Dec  4 04:35:00 np0005545273 systemd[1]: Reached target Login Prompts.
Dec  4 04:35:00 np0005545273 systemd[1]: Started OpenSSH server daemon.
Dec  4 04:35:00 np0005545273 systemd[1]: Started System Logging Service.
Dec  4 04:35:00 np0005545273 systemd[1]: Reached target Multi-User System.
Dec  4 04:35:00 np0005545273 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  4 04:35:00 np0005545273 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  4 04:35:00 np0005545273 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  4 04:35:00 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 04:35:00 np0005545273 kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Dec  4 04:35:00 np0005545273 kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  4 04:35:01 np0005545273 cloud-init[1146]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 04 Dec 2025 09:35:00 +0000. Up 10.63 seconds.
Dec  4 04:35:01 np0005545273 systemd[1]: Finished Cloud-init: Config Stage.
Dec  4 04:35:01 np0005545273 systemd[1]: Starting Cloud-init: Final Stage...
Dec  4 04:35:01 np0005545273 dracut[1267]: dracut-057-102.git20250818.el9
Dec  4 04:35:01 np0005545273 cloud-init[1285]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 04 Dec 2025 09:35:01 +0000. Up 11.07 seconds.
Dec  4 04:35:01 np0005545273 dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  4 04:35:01 np0005545273 cloud-init[1315]: #############################################################
Dec  4 04:35:01 np0005545273 cloud-init[1318]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  4 04:35:01 np0005545273 cloud-init[1328]: 256 SHA256:8PK4bSMwPSMuUx/IPwPb3Q2p/4RD04yiV98AOF5c+48 root@np0005545273.novalocal (ECDSA)
Dec  4 04:35:01 np0005545273 cloud-init[1339]: 256 SHA256:Kw1KxzD3CXgyiFiocLbwAs57iba4815XYGhFDz1z1k8 root@np0005545273.novalocal (ED25519)
Dec  4 04:35:01 np0005545273 cloud-init[1349]: 3072 SHA256:SOKl5YOFJg3y4xP1gD+6MIF1nzlXlosXII9rK9Wj6X4 root@np0005545273.novalocal (RSA)
Dec  4 04:35:01 np0005545273 cloud-init[1351]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  4 04:35:01 np0005545273 cloud-init[1356]: #############################################################
Dec  4 04:35:01 np0005545273 cloud-init[1285]: Cloud-init v. 24.4-7.el9 finished at Thu, 04 Dec 2025 09:35:01 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.28 seconds
Dec  4 04:35:01 np0005545273 systemd[1]: Finished Cloud-init: Final Stage.
Dec  4 04:35:01 np0005545273 systemd[1]: Reached target Cloud-init target.
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  4 04:35:02 np0005545273 chronyd[791]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Dec  4 04:35:02 np0005545273 chronyd[791]: System clock TAI offset set to 37 seconds
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: memstrack is not available
Dec  4 04:35:02 np0005545273 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  4 04:35:02 np0005545273 dracut[1269]: memstrack is not available
Dec  4 04:35:02 np0005545273 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  4 04:35:03 np0005545273 dracut[1269]: *** Including module: systemd ***
Dec  4 04:35:03 np0005545273 dracut[1269]: *** Including module: fips ***
Dec  4 04:35:03 np0005545273 dracut[1269]: *** Including module: systemd-initrd ***
Dec  4 04:35:03 np0005545273 dracut[1269]: *** Including module: i18n ***
Dec  4 04:35:03 np0005545273 dracut[1269]: *** Including module: drm ***
Dec  4 04:35:04 np0005545273 dracut[1269]: *** Including module: prefixdevname ***
Dec  4 04:35:04 np0005545273 dracut[1269]: *** Including module: kernel-modules ***
Dec  4 04:35:04 np0005545273 kernel: block vda: the capability attribute has been deprecated.
Dec  4 04:35:05 np0005545273 dracut[1269]: *** Including module: kernel-modules-extra ***
Dec  4 04:35:05 np0005545273 dracut[1269]: *** Including module: qemu ***
Dec  4 04:35:05 np0005545273 dracut[1269]: *** Including module: fstab-sys ***
Dec  4 04:35:05 np0005545273 dracut[1269]: *** Including module: rootfs-block ***
Dec  4 04:35:05 np0005545273 dracut[1269]: *** Including module: terminfo ***
Dec  4 04:35:05 np0005545273 dracut[1269]: *** Including module: udev-rules ***
Dec  4 04:35:05 np0005545273 dracut[1269]: Skipping udev rule: 91-permissions.rules
Dec  4 04:35:05 np0005545273 dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  4 04:35:05 np0005545273 dracut[1269]: *** Including module: virtiofs ***
Dec  4 04:35:06 np0005545273 dracut[1269]: *** Including module: dracut-systemd ***
Dec  4 04:35:06 np0005545273 dracut[1269]: *** Including module: usrmount ***
Dec  4 04:35:06 np0005545273 dracut[1269]: *** Including module: base ***
Dec  4 04:35:06 np0005545273 irqbalance[793]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  4 04:35:06 np0005545273 irqbalance[793]: IRQ 25 affinity is now unmanaged
Dec  4 04:35:06 np0005545273 irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  4 04:35:06 np0005545273 irqbalance[793]: IRQ 31 affinity is now unmanaged
Dec  4 04:35:06 np0005545273 irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  4 04:35:06 np0005545273 irqbalance[793]: IRQ 28 affinity is now unmanaged
Dec  4 04:35:06 np0005545273 irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  4 04:35:06 np0005545273 irqbalance[793]: IRQ 32 affinity is now unmanaged
Dec  4 04:35:06 np0005545273 irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  4 04:35:06 np0005545273 irqbalance[793]: IRQ 30 affinity is now unmanaged
Dec  4 04:35:06 np0005545273 irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  4 04:35:06 np0005545273 irqbalance[793]: IRQ 29 affinity is now unmanaged
Dec  4 04:35:06 np0005545273 dracut[1269]: *** Including module: fs-lib ***
Dec  4 04:35:06 np0005545273 dracut[1269]: *** Including module: kdumpbase ***
Dec  4 04:35:06 np0005545273 dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  4 04:35:06 np0005545273 dracut[1269]:  microcode_ctl module: mangling fw_dir
Dec  4 04:35:06 np0005545273 dracut[1269]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  4 04:35:06 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  4 04:35:07 np0005545273 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  4 04:35:07 np0005545273 dracut[1269]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  4 04:35:07 np0005545273 dracut[1269]: *** Including module: openssl ***
Dec  4 04:35:07 np0005545273 dracut[1269]: *** Including module: shutdown ***
Dec  4 04:35:07 np0005545273 dracut[1269]: *** Including module: squash ***
Dec  4 04:35:07 np0005545273 dracut[1269]: *** Including modules done ***
Dec  4 04:35:07 np0005545273 dracut[1269]: *** Installing kernel module dependencies ***
Dec  4 04:35:08 np0005545273 dracut[1269]: *** Installing kernel module dependencies done ***
Dec  4 04:35:08 np0005545273 dracut[1269]: *** Resolving executable dependencies ***
Dec  4 04:35:10 np0005545273 dracut[1269]: *** Resolving executable dependencies done ***
Dec  4 04:35:10 np0005545273 dracut[1269]: *** Generating early-microcode cpio image ***
Dec  4 04:35:10 np0005545273 dracut[1269]: *** Store current command line parameters ***
Dec  4 04:35:10 np0005545273 dracut[1269]: Stored kernel commandline:
Dec  4 04:35:10 np0005545273 dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Dec  4 04:35:10 np0005545273 dracut[1269]: *** Install squash loader ***
Dec  4 04:35:11 np0005545273 dracut[1269]: *** Squashing the files inside the initramfs ***
Dec  4 04:35:12 np0005545273 dracut[1269]: *** Squashing the files inside the initramfs done ***
Dec  4 04:35:12 np0005545273 dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  4 04:35:12 np0005545273 dracut[1269]: *** Hardlinking files ***
Dec  4 04:35:12 np0005545273 dracut[1269]: *** Hardlinking files done ***
Dec  4 04:35:13 np0005545273 dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  4 04:35:13 np0005545273 kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Dec  4 04:35:13 np0005545273 kdumpctl[1014]: kdump: Starting kdump: [OK]
Dec  4 04:35:13 np0005545273 systemd[1]: Finished Crash recovery kernel arming.
Dec  4 04:35:13 np0005545273 systemd[1]: Startup finished in 1.545s (kernel) + 2.804s (initrd) + 19.085s (userspace) = 23.435s.
Dec  4 04:35:26 np0005545273 systemd[1]: Created slice User Slice of UID 1000.
Dec  4 04:35:26 np0005545273 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  4 04:35:26 np0005545273 systemd-logind[798]: New session 1 of user zuul.
Dec  4 04:35:26 np0005545273 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  4 04:35:26 np0005545273 systemd[1]: Starting User Manager for UID 1000...
Dec  4 04:35:26 np0005545273 systemd[4300]: Queued start job for default target Main User Target.
Dec  4 04:35:26 np0005545273 systemd[4300]: Created slice User Application Slice.
Dec  4 04:35:26 np0005545273 systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  4 04:35:26 np0005545273 systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Dec  4 04:35:26 np0005545273 systemd[4300]: Reached target Paths.
Dec  4 04:35:26 np0005545273 systemd[4300]: Reached target Timers.
Dec  4 04:35:26 np0005545273 systemd[4300]: Starting D-Bus User Message Bus Socket...
Dec  4 04:35:26 np0005545273 systemd[4300]: Starting Create User's Volatile Files and Directories...
Dec  4 04:35:26 np0005545273 systemd[4300]: Finished Create User's Volatile Files and Directories.
Dec  4 04:35:26 np0005545273 systemd[4300]: Listening on D-Bus User Message Bus Socket.
Dec  4 04:35:26 np0005545273 systemd[4300]: Reached target Sockets.
Dec  4 04:35:26 np0005545273 systemd[4300]: Reached target Basic System.
Dec  4 04:35:26 np0005545273 systemd[4300]: Reached target Main User Target.
Dec  4 04:35:26 np0005545273 systemd[4300]: Startup finished in 144ms.
Dec  4 04:35:26 np0005545273 systemd[1]: Started User Manager for UID 1000.
Dec  4 04:35:26 np0005545273 systemd[1]: Started Session 1 of User zuul.
Dec  4 04:35:27 np0005545273 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 04:35:27 np0005545273 python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 04:35:29 np0005545273 python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 04:35:35 np0005545273 python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 04:35:36 np0005545273 python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  4 04:35:38 np0005545273 python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUqQ+zl6uP5KOngryJfCkwhsXDB3oKN/oaspiL29U/2htEnlgClVIUqWUFROF9cojHZrJS7yBFbep+K7ia1Dx6zoAwADAOWyndh0dCkGDk9PTh2TgGHSQ+BDm3L+v+bpMHl7fZDiUdLCZLuouKBKSqV1nOImjFhsiHQaiUcQYKlxCVEaG5PbbYj0kOFUYLN6FjLRLs/8sCfmdl0sBkaM1E+Dj41CnuhXDYr6n/CzIdZAArx0j5DLsaOpDRSZdS6Y04CWdMye4E3mL4kCMwB1WxEL4vtopwfrXpAVDbn4E1Nh9WO27G6m3IWcnjGdzl0T4Pxvp1nE4ocR3R9/TnobaQoLbqzDn1HHMMpWfg5WePf/GrAWUir8gFZpHb6Fuw4nTgL+wZs2wViNFZ+4aEEwsXrhmRVHmFsr4XGALR+VaJjLh30YeRgdX1iy+3t2vEwnUef2eo+0KrVrAYMEJGiQecTsjVe7nW7c6JwoRy+eTI0qY6LVA7Dbgmwj7EhlUaPoE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:38 np0005545273 python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:39 np0005545273 python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:35:39 np0005545273 python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764840938.6559348-207-228925058330639/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=97d96a2127d94d00a8de10b9a25007d0_id_rsa follow=False checksum=4a2583d826b5c5c32fdb603a217b55fd5664c5ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:40 np0005545273 python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:35:40 np0005545273 python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764840939.6373396-240-22004543482279/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=97d96a2127d94d00a8de10b9a25007d0_id_rsa.pub follow=False checksum=0ac96abdd642eb78b0b0bdefaa890f144fcc6145 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:41 np0005545273 python3[4973]: ansible-ping Invoked with data=pong
Dec  4 04:35:42 np0005545273 python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 04:35:44 np0005545273 python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  4 04:35:45 np0005545273 python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:45 np0005545273 python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:45 np0005545273 python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:46 np0005545273 python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:46 np0005545273 python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:46 np0005545273 python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:48 np0005545273 python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:48 np0005545273 python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:35:49 np0005545273 python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764840948.3267643-21-244587707564640/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:35:49 np0005545273 python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:50 np0005545273 python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:50 np0005545273 python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:50 np0005545273 python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:50 np0005545273 python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:51 np0005545273 python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:51 np0005545273 python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:51 np0005545273 python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:52 np0005545273 python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:52 np0005545273 python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:52 np0005545273 python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:52 np0005545273 python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:53 np0005545273 python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:53 np0005545273 python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:53 np0005545273 python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:54 np0005545273 python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:54 np0005545273 python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:54 np0005545273 python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:55 np0005545273 python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:55 np0005545273 python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:55 np0005545273 python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:55 np0005545273 python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:56 np0005545273 python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:56 np0005545273 python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:56 np0005545273 python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:57 np0005545273 python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:35:59 np0005545273 python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  4 04:35:59 np0005545273 systemd[1]: Starting Time & Date Service...
Dec  4 04:35:59 np0005545273 systemd[1]: Started Time & Date Service.
Dec  4 04:35:59 np0005545273 systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Dec  4 04:36:00 np0005545273 python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:36:00 np0005545273 python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:36:01 np0005545273 python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764840960.5124042-153-273856972897201/source _original_basename=tmpn3uswt3_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:36:01 np0005545273 python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:36:02 np0005545273 python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764840961.385366-183-257220758849323/source _original_basename=tmpvqw8h3x7 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:36:02 np0005545273 python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:36:03 np0005545273 python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764840962.5349348-231-272022659480525/source _original_basename=tmpcyv732f2 follow=False checksum=7a82bff5b5e9039ad1ac15f6a7286925b777bf85 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:36:03 np0005545273 python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:36:04 np0005545273 python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:36:04 np0005545273 python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:36:05 np0005545273 python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764840964.2568264-273-187044394242049/source _original_basename=tmp1nbchae8 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:36:05 np0005545273 python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-c10e-9286-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:36:06 np0005545273 python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-c10e-9286-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  4 04:36:06 np0005545273 irqbalance[793]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  4 04:36:06 np0005545273 irqbalance[793]: IRQ 26 affinity is now unmanaged
Dec  4 04:36:07 np0005545273 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:36:08 np0005545273 chronyd[791]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec  4 04:36:24 np0005545273 python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:36:29 np0005545273 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  4 04:37:03 np0005545273 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  4 04:37:03 np0005545273 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8714] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  4 04:37:03 np0005545273 systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8899] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8934] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8940] device (eth1): carrier: link connected
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8944] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8953] policy: auto-activating connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93)
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8959] device (eth1): Activation: starting connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93)
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8961] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8965] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8970] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 04:37:03 np0005545273 NetworkManager[860]: <info>  [1764841023.8976] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  4 04:37:05 np0005545273 python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0f59-cfe6-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:37:15 np0005545273 python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:37:15 np0005545273 python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764841034.8190563-102-173663138953724/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=74e396badec11bd73909255d1e70547a105775dc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:37:16 np0005545273 python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 04:37:16 np0005545273 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  4 04:37:16 np0005545273 systemd[1]: Stopped Network Manager Wait Online.
Dec  4 04:37:16 np0005545273 systemd[1]: Stopping Network Manager Wait Online...
Dec  4 04:37:16 np0005545273 systemd[1]: Stopping Network Manager...
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4579] caught SIGTERM, shutting down normally.
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4591] dhcp4 (eth0): canceled DHCP transaction
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4592] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4592] dhcp4 (eth0): state changed no lease
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4594] manager: NetworkManager state is now CONNECTING
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4691] dhcp4 (eth1): canceled DHCP transaction
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4691] dhcp4 (eth1): state changed no lease
Dec  4 04:37:16 np0005545273 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 04:37:16 np0005545273 NetworkManager[860]: <info>  [1764841036.4751] exiting (success)
Dec  4 04:37:16 np0005545273 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 04:37:16 np0005545273 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  4 04:37:16 np0005545273 systemd[1]: Stopped Network Manager.
Dec  4 04:37:16 np0005545273 systemd[1]: NetworkManager.service: Consumed 1.054s CPU time, 10.0M memory peak.
Dec  4 04:37:16 np0005545273 systemd[1]: Starting Network Manager...
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.5277] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:df4fb9d0-81a4-4e5e-8b88-c0920d7ba5e9)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.5284] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.5342] manager[0x562850136070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  4 04:37:16 np0005545273 systemd[1]: Starting Hostname Service...
Dec  4 04:37:16 np0005545273 systemd[1]: Started Hostname Service.
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6535] hostname: hostname: using hostnamed
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6539] hostname: static hostname changed from (none) to "np0005545273.novalocal"
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6546] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6552] manager[0x562850136070]: rfkill: Wi-Fi hardware radio set enabled
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6552] manager[0x562850136070]: rfkill: WWAN hardware radio set enabled
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6582] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6582] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6583] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6583] manager: Networking is enabled by state file
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6586] settings: Loaded settings plugin: keyfile (internal)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6589] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6614] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6624] dhcp: init: Using DHCP client 'internal'
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6626] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6633] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6638] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6646] device (lo): Activation: starting connection 'lo' (3cd632aa-e4f7-4e63-bb4d-c1d9ec185b32)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6652] device (eth0): carrier: link connected
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6657] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6662] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6662] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6667] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6674] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6680] device (eth1): carrier: link connected
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6685] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6689] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93) (indicated)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6689] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6695] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6703] device (eth1): Activation: starting connection 'Wired connection 1' (e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6709] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6713] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  4 04:37:16 np0005545273 systemd[1]: Started Network Manager.
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6725] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6728] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6731] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6734] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6736] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6739] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6745] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6752] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6755] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6765] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6768] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6788] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6790] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6796] device (lo): Activation: successful, device activated.
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6805] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6814] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  4 04:37:16 np0005545273 systemd[1]: Starting Network Manager Wait Online...
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6898] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6919] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6922] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6926] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6936] device (eth0): Activation: successful, device activated.
Dec  4 04:37:16 np0005545273 NetworkManager[7184]: <info>  [1764841036.6942] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  4 04:37:17 np0005545273 python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0f59-cfe6-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:37:26 np0005545273 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 04:37:41 np0005545273 systemd[4300]: Starting Mark boot as successful...
Dec  4 04:37:41 np0005545273 systemd[4300]: Finished Mark boot as successful.
Dec  4 04:37:46 np0005545273 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.3565] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 04:38:02 np0005545273 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 04:38:02 np0005545273 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.3898] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.3902] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.3914] device (eth1): Activation: successful, device activated.
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.3924] manager: startup complete
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.3928] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <warn>  [1764841082.3937] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.3948] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  4 04:38:02 np0005545273 systemd[1]: Finished Network Manager Wait Online.
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4021] dhcp4 (eth1): canceled DHCP transaction
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4022] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4022] dhcp4 (eth1): state changed no lease
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4043] policy: auto-activating connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4052] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4054] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4057] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4068] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4082] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4142] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4144] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 04:38:02 np0005545273 NetworkManager[7184]: <info>  [1764841082.4155] device (eth1): Activation: successful, device activated.
Dec  4 04:38:12 np0005545273 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 04:38:17 np0005545273 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Dec  4 04:38:22 np0005545273 systemd-logind[798]: New session 3 of user zuul.
Dec  4 04:38:22 np0005545273 systemd[1]: Started Session 3 of User zuul.
Dec  4 04:38:22 np0005545273 python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:38:22 np0005545273 python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764841102.1596544-267-162519117284172/source _original_basename=tmpip6e7upn follow=False checksum=ff6fb6bb40e9eca3d2188a5a673f0d4ae4acf72d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:38:25 np0005545273 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Dec  4 04:38:25 np0005545273 systemd[1]: session-3.scope: Deactivated successfully.
Dec  4 04:38:25 np0005545273 systemd-logind[798]: Removed session 3.
Dec  4 04:40:41 np0005545273 systemd[4300]: Created slice User Background Tasks Slice.
Dec  4 04:40:41 np0005545273 systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Dec  4 04:40:41 np0005545273 systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Dec  4 04:45:57 np0005545273 systemd-logind[798]: New session 4 of user zuul.
Dec  4 04:45:57 np0005545273 systemd[1]: Started Session 4 of User zuul.
Dec  4 04:45:57 np0005545273 python3[7509]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-498b-906a-000000001cda-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:45:57 np0005545273 python3[7537]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:45:58 np0005545273 python3[7564]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:45:58 np0005545273 python3[7590]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:45:58 np0005545273 python3[7616]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:45:59 np0005545273 python3[7642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:45:59 np0005545273 python3[7720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:46:01 np0005545273 python3[7793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764841559.552521-479-178493040891327/source _original_basename=tmpdf5infue follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:46:01 np0005545273 python3[7843]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 04:46:02 np0005545273 systemd[1]: Reloading.
Dec  4 04:46:02 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 04:46:03 np0005545273 python3[7898]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  4 04:46:04 np0005545273 python3[7924]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:46:04 np0005545273 python3[7952]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:46:04 np0005545273 python3[7980]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:46:05 np0005545273 python3[8008]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:46:05 np0005545273 python3[8035]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-498b-906a-000000001ce1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:46:06 np0005545273 python3[8065]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 04:46:07 np0005545273 systemd[1]: session-4.scope: Deactivated successfully.
Dec  4 04:46:07 np0005545273 systemd[1]: session-4.scope: Consumed 4.746s CPU time.
Dec  4 04:46:07 np0005545273 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Dec  4 04:46:07 np0005545273 systemd-logind[798]: Removed session 4.
Dec  4 04:46:09 np0005545273 systemd-logind[798]: New session 5 of user zuul.
Dec  4 04:46:09 np0005545273 systemd[1]: Started Session 5 of User zuul.
Dec  4 04:46:09 np0005545273 python3[8099]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  4 04:46:25 np0005545273 kernel: SELinux:  Converting 385 SID table entries...
Dec  4 04:46:25 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 04:46:25 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 04:46:25 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 04:46:25 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 04:46:25 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 04:46:25 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 04:46:25 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 04:46:34 np0005545273 kernel: SELinux:  Converting 385 SID table entries...
Dec  4 04:46:34 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 04:46:34 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 04:46:34 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 04:46:34 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 04:46:34 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 04:46:34 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 04:46:34 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 04:46:43 np0005545273 kernel: SELinux:  Converting 385 SID table entries...
Dec  4 04:46:43 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 04:46:43 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 04:46:43 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 04:46:43 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 04:46:43 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 04:46:43 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 04:46:43 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 04:46:44 np0005545273 setsebool[8165]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  4 04:46:44 np0005545273 setsebool[8165]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  4 04:46:55 np0005545273 kernel: SELinux:  Converting 388 SID table entries...
Dec  4 04:46:55 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 04:46:55 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 04:46:55 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 04:46:55 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 04:46:55 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 04:46:55 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 04:46:55 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 04:47:12 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  4 04:47:12 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 04:47:12 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 04:47:12 np0005545273 systemd[1]: Reloading.
Dec  4 04:47:13 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 04:47:13 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 04:47:32 np0005545273 python3[17676]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7a6d-a5d1-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:47:33 np0005545273 kernel: evm: overlay not supported
Dec  4 04:47:33 np0005545273 systemd[4300]: Starting D-Bus User Message Bus...
Dec  4 04:47:33 np0005545273 dbus-broker-launch[18074]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  4 04:47:33 np0005545273 dbus-broker-launch[18074]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  4 04:47:33 np0005545273 systemd[4300]: Started D-Bus User Message Bus.
Dec  4 04:47:33 np0005545273 dbus-broker-lau[18074]: Ready
Dec  4 04:47:33 np0005545273 systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  4 04:47:33 np0005545273 systemd[4300]: Created slice Slice /user.
Dec  4 04:47:33 np0005545273 systemd[4300]: podman-18011.scope: unit configures an IP firewall, but not running as root.
Dec  4 04:47:33 np0005545273 systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Dec  4 04:47:33 np0005545273 systemd[4300]: Started podman-18011.scope.
Dec  4 04:47:33 np0005545273 systemd[4300]: Started podman-pause-337037d6.scope.
Dec  4 04:47:34 np0005545273 python3[18346]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.73:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.73:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:47:34 np0005545273 python3[18346]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  4 04:47:34 np0005545273 systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Dec  4 04:47:34 np0005545273 systemd[1]: session-5.scope: Deactivated successfully.
Dec  4 04:47:34 np0005545273 systemd[1]: session-5.scope: Consumed 59.194s CPU time.
Dec  4 04:47:34 np0005545273 systemd-logind[798]: Removed session 5.
Dec  4 04:47:46 np0005545273 irqbalance[793]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  4 04:47:46 np0005545273 irqbalance[793]: IRQ 27 affinity is now unmanaged
Dec  4 04:47:59 np0005545273 systemd-logind[798]: New session 6 of user zuul.
Dec  4 04:47:59 np0005545273 systemd[1]: Started Session 6 of User zuul.
Dec  4 04:48:00 np0005545273 python3[26613]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo+SIM7dQ84iyV1xgijokMsOaxlQFhYszhuuPRuvUmZ/3GmJeJAn48BSIn6R3D70IagTKyKdJYxZwXC9nloQBw= zuul@np0005545272.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:48:00 np0005545273 python3[26821]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo+SIM7dQ84iyV1xgijokMsOaxlQFhYszhuuPRuvUmZ/3GmJeJAn48BSIn6R3D70IagTKyKdJYxZwXC9nloQBw= zuul@np0005545272.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:48:01 np0005545273 python3[27120]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005545273.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  4 04:48:01 np0005545273 python3[27343]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDo+SIM7dQ84iyV1xgijokMsOaxlQFhYszhuuPRuvUmZ/3GmJeJAn48BSIn6R3D70IagTKyKdJYxZwXC9nloQBw= zuul@np0005545272.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 04:48:02 np0005545273 python3[27649]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:48:02 np0005545273 python3[27903]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764841682.0918944-135-246479739412575/source _original_basename=tmp30zivo4v follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:48:03 np0005545273 python3[28182]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  4 04:48:03 np0005545273 systemd[1]: Starting Hostname Service...
Dec  4 04:48:03 np0005545273 systemd[1]: Started Hostname Service.
Dec  4 04:48:03 np0005545273 systemd-hostnamed[28291]: Changed pretty hostname to 'compute-0'
Dec  4 04:48:03 np0005545273 systemd-hostnamed[28291]: Hostname set to <compute-0> (static)
Dec  4 04:48:03 np0005545273 NetworkManager[7184]: <info>  [1764841683.9466] hostname: static hostname changed from "np0005545273.novalocal" to "compute-0"
Dec  4 04:48:03 np0005545273 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 04:48:03 np0005545273 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 04:48:04 np0005545273 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Dec  4 04:48:04 np0005545273 systemd[1]: session-6.scope: Deactivated successfully.
Dec  4 04:48:04 np0005545273 systemd[1]: session-6.scope: Consumed 2.425s CPU time.
Dec  4 04:48:04 np0005545273 systemd-logind[798]: Removed session 6.
Dec  4 04:48:08 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 04:48:08 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 04:48:08 np0005545273 systemd[1]: man-db-cache-update.service: Consumed 1min 6.035s CPU time.
Dec  4 04:48:08 np0005545273 systemd[1]: run-r1ae6d83a120f43108486f2c8e19e0c92.service: Deactivated successfully.
Dec  4 04:48:14 np0005545273 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 04:48:34 np0005545273 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 04:50:31 np0005545273 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  4 04:50:31 np0005545273 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  4 04:50:31 np0005545273 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  4 04:50:31 np0005545273 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  4 04:52:30 np0005545273 systemd-logind[798]: New session 7 of user zuul.
Dec  4 04:52:30 np0005545273 systemd[1]: Started Session 7 of User zuul.
Dec  4 04:52:30 np0005545273 python3[30013]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 04:52:32 np0005545273 python3[30130]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:52:33 np0005545273 python3[30203]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:52:33 np0005545273 python3[30229]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:52:33 np0005545273 python3[30302]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:52:34 np0005545273 python3[30328]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:52:34 np0005545273 python3[30401]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:52:34 np0005545273 python3[30427]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:52:35 np0005545273 python3[30500]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:52:35 np0005545273 python3[30526]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:52:35 np0005545273 python3[30599]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:52:36 np0005545273 python3[30625]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:52:36 np0005545273 python3[30698]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:52:36 np0005545273 python3[30724]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 04:52:37 np0005545273 python3[30797]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764841952.2822034-33577-67209919866313/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 04:52:50 np0005545273 python3[30855]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 04:57:50 np0005545273 systemd[1]: session-7.scope: Deactivated successfully.
Dec  4 04:57:50 np0005545273 systemd[1]: session-7.scope: Consumed 5.629s CPU time.
Dec  4 04:57:50 np0005545273 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Dec  4 04:57:50 np0005545273 systemd-logind[798]: Removed session 7.
Dec  4 05:03:41 np0005545273 systemd[1]: Starting dnf makecache...
Dec  4 05:03:42 np0005545273 dnf[30904]: Failed determining last makecache time.
Dec  4 05:03:43 np0005545273 dnf[30904]: delorean-openstack-barbican-42b4c41831408a8e323  20 kB/s |  13 kB     00:00
Dec  4 05:03:43 np0005545273 dnf[30904]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 219 kB/s |  65 kB     00:00
Dec  4 05:03:43 np0005545273 dnf[30904]: delorean-openstack-cinder-1c00d6490d88e436f26ef 827 kB/s |  32 kB     00:00
Dec  4 05:03:44 np0005545273 dnf[30904]: delorean-python-stevedore-c4acc5639fd2329372142 401 kB/s | 131 kB     00:00
Dec  4 05:03:44 np0005545273 dnf[30904]: delorean-python-cloudkitty-tests-tempest-2c80f8 115 kB/s |  32 kB     00:00
Dec  4 05:03:45 np0005545273 dnf[30904]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 264 kB/s | 349 kB     00:01
Dec  4 05:03:47 np0005545273 dnf[30904]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  22 kB/s |  42 kB     00:01
Dec  4 05:03:48 np0005545273 dnf[30904]: delorean-python-designate-tests-tempest-347fdbc  28 kB/s |  18 kB     00:00
Dec  4 05:03:49 np0005545273 dnf[30904]: delorean-openstack-glance-1fd12c29b339f30fe823e  19 kB/s |  18 kB     00:00
Dec  4 05:03:50 np0005545273 dnf[30904]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  33 kB/s |  29 kB     00:00
Dec  4 05:03:50 np0005545273 dnf[30904]: delorean-openstack-manila-3c01b7181572c95dac462  46 kB/s |  25 kB     00:00
Dec  4 05:03:52 np0005545273 dnf[30904]: delorean-python-whitebox-neutron-tests-tempest-  78 kB/s | 154 kB     00:01
Dec  4 05:03:53 np0005545273 dnf[30904]: delorean-openstack-octavia-ba397f07a7331190208c 239 kB/s |  26 kB     00:00
Dec  4 05:03:53 np0005545273 dnf[30904]: delorean-openstack-watcher-c014f81a8647287f6dcc  24 kB/s |  16 kB     00:00
Dec  4 05:03:54 np0005545273 dnf[30904]: delorean-ansible-config_template-5ccaa22121a7ff  25 kB/s | 7.4 kB     00:00
Dec  4 05:03:54 np0005545273 dnf[30904]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 1.0 MB/s | 144 kB     00:00
Dec  4 05:03:54 np0005545273 dnf[30904]: delorean-openstack-swift-dc98a8463506ac520c469a  77 kB/s |  14 kB     00:00
Dec  4 05:03:55 np0005545273 dnf[30904]: delorean-python-tempestconf-8515371b7cceebd4282  66 kB/s |  53 kB     00:00
Dec  4 05:03:55 np0005545273 dnf[30904]: delorean-openstack-heat-ui-013accbfd179753bc3f0 1.1 MB/s |  96 kB     00:00
Dec  4 05:03:55 np0005545273 dnf[30904]: CentOS Stream 9 - BaseOS                         75 kB/s | 7.0 kB     00:00
Dec  4 05:03:55 np0005545273 dnf[30904]: CentOS Stream 9 - AppStream                      68 kB/s | 7.1 kB     00:00
Dec  4 05:03:56 np0005545273 dnf[30904]: CentOS Stream 9 - CRB                            29 kB/s | 6.9 kB     00:00
Dec  4 05:03:56 np0005545273 dnf[30904]: CentOS Stream 9 - Extras packages                76 kB/s | 8.3 kB     00:00
Dec  4 05:03:56 np0005545273 dnf[30904]: dlrn-antelope-testing                           2.7 MB/s | 1.1 MB     00:00
Dec  4 05:03:57 np0005545273 dnf[30904]: dlrn-antelope-build-deps                        1.6 MB/s | 461 kB     00:00
Dec  4 05:03:57 np0005545273 dnf[30904]: centos9-rabbitmq                                1.3 MB/s | 123 kB     00:00
Dec  4 05:03:57 np0005545273 dnf[30904]: centos9-storage                                 1.4 MB/s | 415 kB     00:00
Dec  4 05:03:58 np0005545273 dnf[30904]: centos9-opstools                                124 kB/s |  51 kB     00:00
Dec  4 05:03:59 np0005545273 dnf[30904]: NFV SIG OpenvSwitch                             438 kB/s | 456 kB     00:01
Dec  4 05:04:01 np0005545273 dnf[30904]: repo-setup-centos-appstream                      13 MB/s |  25 MB     00:02
Dec  4 05:04:08 np0005545273 dnf[30904]: repo-setup-centos-baseos                         16 MB/s | 8.8 MB     00:00
Dec  4 05:04:09 np0005545273 dnf[30904]: repo-setup-centos-highavailability              6.4 MB/s | 744 kB     00:00
Dec  4 05:04:10 np0005545273 dnf[30904]: repo-setup-centos-powertools                     21 MB/s | 7.3 MB     00:00
Dec  4 05:04:14 np0005545273 dnf[30904]: Extra Packages for Enterprise Linux 9 - x86_64  8.5 MB/s |  20 MB     00:02
Dec  4 05:04:30 np0005545273 dnf[30904]: Metadata cache created.
Dec  4 05:04:30 np0005545273 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  4 05:04:30 np0005545273 systemd[1]: Finished dnf makecache.
Dec  4 05:04:30 np0005545273 systemd[1]: dnf-makecache.service: Consumed 26.373s CPU time.
Dec  4 05:04:52 np0005545273 systemd-logind[798]: New session 8 of user zuul.
Dec  4 05:04:52 np0005545273 systemd[1]: Started Session 8 of User zuul.
Dec  4 05:04:53 np0005545273 python3.9[31173]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:04:55 np0005545273 python3.9[31354]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:05:03 np0005545273 systemd[1]: session-8.scope: Deactivated successfully.
Dec  4 05:05:03 np0005545273 systemd[1]: session-8.scope: Consumed 8.031s CPU time.
Dec  4 05:05:03 np0005545273 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Dec  4 05:05:03 np0005545273 systemd-logind[798]: Removed session 8.
Dec  4 05:05:18 np0005545273 systemd-logind[798]: New session 9 of user zuul.
Dec  4 05:05:18 np0005545273 systemd[1]: Started Session 9 of User zuul.
Dec  4 05:05:19 np0005545273 python3.9[31570]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  4 05:05:20 np0005545273 python3.9[31744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:05:21 np0005545273 python3.9[31896]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:05:22 np0005545273 python3.9[32049]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:05:23 np0005545273 python3.9[32201]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:05:24 np0005545273 python3.9[32353]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:05:24 np0005545273 python3.9[32476]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764842723.4979672-73-136369126107186/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:05:25 np0005545273 python3.9[32628]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:05:26 np0005545273 python3.9[32784]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:05:26 np0005545273 python3.9[32936]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:05:27 np0005545273 python3.9[33086]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:05:31 np0005545273 python3.9[33339]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:05:32 np0005545273 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:05:33 np0005545273 python3.9[33643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:05:34 np0005545273 python3.9[33801]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:05:35 np0005545273 python3.9[33885]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:06:18 np0005545273 systemd[1]: Reloading.
Dec  4 05:06:18 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:06:18 np0005545273 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  4 05:06:18 np0005545273 systemd[1]: Reloading.
Dec  4 05:06:18 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:06:19 np0005545273 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  4 05:06:19 np0005545273 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  4 05:06:19 np0005545273 systemd[1]: Reloading.
Dec  4 05:06:19 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:06:19 np0005545273 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  4 05:06:19 np0005545273 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec  4 05:06:19 np0005545273 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec  4 05:07:37 np0005545273 kernel: SELinux:  Converting 2719 SID table entries...
Dec  4 05:07:37 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 05:07:37 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 05:07:37 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 05:07:37 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 05:07:37 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 05:07:37 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 05:07:37 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 05:07:37 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  4 05:07:37 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:07:37 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:07:37 np0005545273 systemd[1]: Reloading.
Dec  4 05:07:37 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:07:38 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 05:07:39 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:07:39 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:07:39 np0005545273 systemd[1]: man-db-cache-update.service: Consumed 1.322s CPU time.
Dec  4 05:07:39 np0005545273 systemd[1]: run-r1e00bc08aa7848808495c1f46f230129.service: Deactivated successfully.
Dec  4 05:07:39 np0005545273 python3.9[35460]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:07:42 np0005545273 python3.9[35741]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  4 05:07:42 np0005545273 python3.9[35893]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  4 05:07:46 np0005545273 python3.9[36047]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:07:47 np0005545273 python3.9[36199]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  4 05:07:49 np0005545273 python3.9[36351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:07:49 np0005545273 python3.9[36503]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:07:50 np0005545273 python3.9[36626]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764842869.1995506-236-211587129898094/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:07:51 np0005545273 python3.9[36778]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:07:51 np0005545273 python3.9[36930]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:07:52 np0005545273 python3.9[37083]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:07:53 np0005545273 python3.9[37235]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  4 05:07:53 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:07:53 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:07:58 np0005545273 python3.9[37389]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 05:07:59 np0005545273 python3.9[37547]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 05:08:00 np0005545273 python3.9[37707]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  4 05:08:01 np0005545273 python3.9[37860]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 05:08:01 np0005545273 python3.9[38020]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  4 05:08:02 np0005545273 python3.9[38172]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:08:08 np0005545273 python3.9[38325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:08:09 np0005545273 python3.9[38477]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:08:09 np0005545273 python3.9[38600]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764842888.8522809-355-164129890382118/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:08:11 np0005545273 python3.9[38752]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:08:11 np0005545273 systemd[1]: Starting Load Kernel Modules...
Dec  4 05:08:11 np0005545273 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  4 05:08:11 np0005545273 kernel: Bridge firewalling registered
Dec  4 05:08:11 np0005545273 systemd-modules-load[38756]: Inserted module 'br_netfilter'
Dec  4 05:08:11 np0005545273 systemd[1]: Finished Load Kernel Modules.
Dec  4 05:08:11 np0005545273 python3.9[38912]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:08:12 np0005545273 python3.9[39035]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764842891.4844732-378-209882278444776/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:08:13 np0005545273 python3.9[39187]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:08:25 np0005545273 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec  4 05:08:25 np0005545273 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec  4 05:08:26 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:08:26 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:08:26 np0005545273 systemd[1]: Reloading.
Dec  4 05:08:26 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:08:26 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 05:08:30 np0005545273 python3.9[41220]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:08:31 np0005545273 python3.9[42183]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  4 05:08:31 np0005545273 python3.9[42856]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:08:32 np0005545273 python3.9[43402]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:08:32 np0005545273 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  4 05:08:33 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:08:33 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:08:33 np0005545273 systemd[1]: man-db-cache-update.service: Consumed 5.569s CPU time.
Dec  4 05:08:33 np0005545273 systemd[1]: run-r44a3b065544d4269a3621b4d4ff8ccc5.service: Deactivated successfully.
Dec  4 05:08:33 np0005545273 systemd[1]: Starting Authorization Manager...
Dec  4 05:08:33 np0005545273 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  4 05:08:33 np0005545273 polkitd[43629]: Started polkitd version 0.117
Dec  4 05:08:33 np0005545273 systemd[1]: Started Authorization Manager.
Dec  4 05:08:34 np0005545273 python3.9[43799]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:08:35 np0005545273 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  4 05:08:35 np0005545273 systemd[1]: tuned.service: Deactivated successfully.
Dec  4 05:08:35 np0005545273 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  4 05:08:35 np0005545273 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  4 05:08:35 np0005545273 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  4 05:08:36 np0005545273 python3.9[43961]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  4 05:08:39 np0005545273 python3.9[44113]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:08:39 np0005545273 systemd[1]: Reloading.
Dec  4 05:08:39 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:08:40 np0005545273 python3.9[44302]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:08:40 np0005545273 systemd[1]: Reloading.
Dec  4 05:08:40 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:08:41 np0005545273 python3.9[44491]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:08:42 np0005545273 python3.9[44644]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:08:42 np0005545273 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  4 05:08:42 np0005545273 python3.9[44799]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:08:44 np0005545273 python3.9[44961]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:08:45 np0005545273 python3.9[45114]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:08:45 np0005545273 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  4 05:08:45 np0005545273 systemd[1]: Stopped Apply Kernel Variables.
Dec  4 05:08:45 np0005545273 systemd[1]: Stopping Apply Kernel Variables...
Dec  4 05:08:45 np0005545273 systemd[1]: Starting Apply Kernel Variables...
Dec  4 05:08:45 np0005545273 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  4 05:08:45 np0005545273 systemd[1]: Finished Apply Kernel Variables.
Dec  4 05:08:46 np0005545273 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Dec  4 05:08:46 np0005545273 systemd[1]: session-9.scope: Deactivated successfully.
Dec  4 05:08:46 np0005545273 systemd[1]: session-9.scope: Consumed 2min 18.039s CPU time.
Dec  4 05:08:46 np0005545273 systemd-logind[798]: Removed session 9.
Dec  4 05:08:52 np0005545273 systemd-logind[798]: New session 10 of user zuul.
Dec  4 05:08:52 np0005545273 systemd[1]: Started Session 10 of User zuul.
Dec  4 05:08:53 np0005545273 python3.9[45299]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:08:54 np0005545273 python3.9[45455]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  4 05:08:55 np0005545273 python3.9[45608]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 05:08:56 np0005545273 python3.9[45766]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 05:08:57 np0005545273 python3.9[45926]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:08:58 np0005545273 python3.9[46010]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  4 05:09:01 np0005545273 python3.9[46178]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:09:14 np0005545273 kernel: SELinux:  Converting 2731 SID table entries...
Dec  4 05:09:14 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 05:09:14 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 05:09:14 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 05:09:14 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 05:09:14 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 05:09:14 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 05:09:14 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 05:09:14 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  4 05:09:14 np0005545273 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  4 05:09:16 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:09:16 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:09:16 np0005545273 systemd[1]: Reloading.
Dec  4 05:09:16 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:09:16 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:09:16 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 05:09:17 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:09:17 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:09:17 np0005545273 systemd[1]: run-r0724c7fe2f3d4c34a419b6a43cd366d1.service: Deactivated successfully.
Dec  4 05:09:18 np0005545273 python3.9[47283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:09:18 np0005545273 systemd[1]: Reloading.
Dec  4 05:09:18 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:09:18 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:09:18 np0005545273 systemd[1]: Starting Open vSwitch Database Unit...
Dec  4 05:09:18 np0005545273 chown[47326]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  4 05:09:18 np0005545273 ovs-ctl[47331]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  4 05:09:19 np0005545273 ovs-ctl[47331]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  4 05:09:19 np0005545273 ovs-ctl[47331]: Starting ovsdb-server [  OK  ]
Dec  4 05:09:19 np0005545273 ovs-vsctl[47380]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  4 05:09:19 np0005545273 ovs-vsctl[47400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"565580d5-3422-4e11-b563-3f1a3db67238\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  4 05:09:19 np0005545273 ovs-ctl[47331]: Configuring Open vSwitch system IDs [  OK  ]
Dec  4 05:09:19 np0005545273 ovs-ctl[47331]: Enabling remote OVSDB managers [  OK  ]
Dec  4 05:09:19 np0005545273 systemd[1]: Started Open vSwitch Database Unit.
Dec  4 05:09:19 np0005545273 ovs-vsctl[47406]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  4 05:09:19 np0005545273 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  4 05:09:19 np0005545273 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  4 05:09:19 np0005545273 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  4 05:09:19 np0005545273 kernel: openvswitch: Open vSwitch switching datapath
Dec  4 05:09:19 np0005545273 ovs-ctl[47450]: Inserting openvswitch module [  OK  ]
Dec  4 05:09:19 np0005545273 ovs-ctl[47419]: Starting ovs-vswitchd [  OK  ]
Dec  4 05:09:19 np0005545273 ovs-vsctl[47467]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  4 05:09:19 np0005545273 ovs-ctl[47419]: Enabling remote OVSDB managers [  OK  ]
Dec  4 05:09:19 np0005545273 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  4 05:09:19 np0005545273 systemd[1]: Starting Open vSwitch...
Dec  4 05:09:19 np0005545273 systemd[1]: Finished Open vSwitch.
Dec  4 05:09:20 np0005545273 python3.9[47619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:09:21 np0005545273 python3.9[47771]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  4 05:09:22 np0005545273 kernel: SELinux:  Converting 2745 SID table entries...
Dec  4 05:09:22 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 05:09:22 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 05:09:22 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 05:09:22 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 05:09:22 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 05:09:22 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 05:09:22 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 05:09:24 np0005545273 python3.9[47927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:09:24 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  4 05:09:25 np0005545273 python3.9[48086]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:09:27 np0005545273 python3.9[48239]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:09:29 np0005545273 python3.9[48526]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  4 05:09:29 np0005545273 python3.9[48676]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:09:30 np0005545273 python3.9[48830]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:09:33 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:09:33 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:09:33 np0005545273 systemd[1]: Reloading.
Dec  4 05:09:33 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:09:33 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:09:33 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 05:09:33 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:09:33 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:09:33 np0005545273 systemd[1]: run-r28b6149121b6431190d37f343739788b.service: Deactivated successfully.
Dec  4 05:09:34 np0005545273 python3.9[49150]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:09:34 np0005545273 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  4 05:09:34 np0005545273 systemd[1]: Stopped Network Manager Wait Online.
Dec  4 05:09:34 np0005545273 systemd[1]: Stopping Network Manager Wait Online...
Dec  4 05:09:34 np0005545273 NetworkManager[7184]: <info>  [1764842974.7492] caught SIGTERM, shutting down normally.
Dec  4 05:09:34 np0005545273 NetworkManager[7184]: <info>  [1764842974.7508] dhcp4 (eth0): canceled DHCP transaction
Dec  4 05:09:34 np0005545273 NetworkManager[7184]: <info>  [1764842974.7508] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 05:09:34 np0005545273 NetworkManager[7184]: <info>  [1764842974.7508] dhcp4 (eth0): state changed no lease
Dec  4 05:09:34 np0005545273 NetworkManager[7184]: <info>  [1764842974.7512] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 05:09:34 np0005545273 systemd[1]: Stopping Network Manager...
Dec  4 05:09:34 np0005545273 NetworkManager[7184]: <info>  [1764842974.7606] exiting (success)
Dec  4 05:09:34 np0005545273 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 05:09:34 np0005545273 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  4 05:09:34 np0005545273 systemd[1]: Stopped Network Manager.
Dec  4 05:09:34 np0005545273 systemd[1]: NetworkManager.service: Consumed 14.353s CPU time, 4.1M memory peak, read 0B from disk, written 34.0K to disk.
Dec  4 05:09:34 np0005545273 systemd[1]: Starting Network Manager...
Dec  4 05:09:34 np0005545273 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.8348] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:df4fb9d0-81a4-4e5e-8b88-c0920d7ba5e9)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.8350] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.8421] manager[0x55fd3257b090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  4 05:09:34 np0005545273 systemd[1]: Starting Hostname Service...
Dec  4 05:09:34 np0005545273 systemd[1]: Started Hostname Service.
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9309] hostname: hostname: using hostnamed
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9311] hostname: static hostname changed from (none) to "compute-0"
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9316] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9321] manager[0x55fd3257b090]: rfkill: Wi-Fi hardware radio set enabled
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9321] manager[0x55fd3257b090]: rfkill: WWAN hardware radio set enabled
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9345] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9356] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9357] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9358] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9358] manager: Networking is enabled by state file
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9361] settings: Loaded settings plugin: keyfile (internal)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9366] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9394] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9401] dhcp: init: Using DHCP client 'internal'
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9403] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9407] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9410] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9416] device (lo): Activation: starting connection 'lo' (3cd632aa-e4f7-4e63-bb4d-c1d9ec185b32)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9421] device (eth0): carrier: link connected
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9424] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9427] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9427] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9431] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9436] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9440] device (eth1): carrier: link connected
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9443] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9446] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53) (indicated)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9446] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9449] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9454] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec  4 05:09:34 np0005545273 systemd[1]: Started Network Manager.
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9458] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9463] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9465] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9466] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9468] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9470] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9473] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9475] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9478] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9483] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9486] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9494] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9506] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9515] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9516] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9519] device (lo): Activation: successful, device activated.
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9524] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9528] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9599] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 systemd[1]: Starting Network Manager Wait Online...
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9603] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9604] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9606] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9608] device (eth1): Activation: successful, device activated.
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9615] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9616] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9618] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9620] device (eth0): Activation: successful, device activated.
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9624] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  4 05:09:34 np0005545273 NetworkManager[49155]: <info>  [1764842974.9626] manager: startup complete
Dec  4 05:09:34 np0005545273 systemd[1]: Finished Network Manager Wait Online.
Dec  4 05:09:35 np0005545273 python3.9[49376]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:09:42 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:09:42 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:09:42 np0005545273 systemd[1]: Reloading.
Dec  4 05:09:42 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:09:42 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:09:42 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 05:09:43 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:09:43 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:09:43 np0005545273 systemd[1]: run-re1c0ad4d9715485d9f2b6b42f6a21cf0.service: Deactivated successfully.
Dec  4 05:09:44 np0005545273 python3.9[49833]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:09:45 np0005545273 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 05:09:45 np0005545273 python3.9[49985]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:46 np0005545273 python3.9[50139]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:46 np0005545273 python3.9[50291]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:47 np0005545273 python3.9[50443]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:47 np0005545273 python3.9[50595]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:48 np0005545273 python3.9[50747]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:09:49 np0005545273 python3.9[50870]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764842988.050382-229-17260998974331/.source _original_basename=.499q43_d follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:49 np0005545273 python3.9[51022]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:50 np0005545273 python3.9[51176]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  4 05:09:51 np0005545273 python3.9[51328]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:09:53 np0005545273 python3.9[51755]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  4 05:09:54 np0005545273 ansible-async_wrapper.py[51930]: Invoked with j993236775427 300 /home/zuul/.ansible/tmp/ansible-tmp-1764842993.616241-295-139400037214218/AnsiballZ_edpm_os_net_config.py _
Dec  4 05:09:54 np0005545273 ansible-async_wrapper.py[51933]: Starting module and watcher
Dec  4 05:09:54 np0005545273 ansible-async_wrapper.py[51933]: Start watching 51934 (300)
Dec  4 05:09:54 np0005545273 ansible-async_wrapper.py[51934]: Start module (51934)
Dec  4 05:09:54 np0005545273 ansible-async_wrapper.py[51930]: Return async_wrapper task started.
Dec  4 05:09:54 np0005545273 python3.9[51935]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  4 05:09:55 np0005545273 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  4 05:09:55 np0005545273 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  4 05:09:55 np0005545273 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  4 05:09:55 np0005545273 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  4 05:09:55 np0005545273 kernel: cfg80211: failed to load regulatory.db
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6019] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6036] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6637] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6638] audit: op="connection-add" uuid="a15c4f20-e55d-495f-8cf8-1789ffb767fc" name="br-ex-br" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6653] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6654] audit: op="connection-add" uuid="3501e357-6a24-4589-b09a-4e45df7b9f1e" name="br-ex-port" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6665] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6665] audit: op="connection-add" uuid="f157efaf-88c1-498e-b7be-9797351e9cc5" name="eth1-port" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6676] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6677] audit: op="connection-add" uuid="7943381a-aa3c-4448-9073-3da4ad63fbc2" name="vlan20-port" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6688] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6688] audit: op="connection-add" uuid="2a86e840-97a9-4eb1-a01a-66b8ba93f9a7" name="vlan21-port" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6699] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6700] audit: op="connection-add" uuid="5a561042-8249-41c1-8751-d90cd23df0d5" name="vlan22-port" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6709] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6710] audit: op="connection-add" uuid="51befe0a-9ddc-4531-a14f-c5a733b7f996" name="vlan23-port" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6728] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51936 uid=0 result="success"
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6743] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  4 05:09:56 np0005545273 NetworkManager[49155]: <info>  [1764842996.6744] audit: op="connection-add" uuid="fdd7d824-308c-4d54-bf4e-3d18073b3936" name="br-ex-if" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0484] audit: op="connection-update" uuid="92b9209e-aa34-525f-93ad-a8f9725aec53" name="ci-private-network" args="connection.slave-type,connection.controller,connection.timestamp,connection.port-type,connection.master,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ipv6.addresses,ipv6.method,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type,ipv4.never-default,ipv4.dns,ipv4.routes,ipv4.addresses,ipv4.method,ipv4.routing-rules" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0532] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0535] audit: op="connection-add" uuid="8d789710-985c-461b-8da6-c676763589e9" name="vlan20-if" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0567] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0570] audit: op="connection-add" uuid="c1a6f5fd-fd90-4bc6-a5cb-00fee3cf0eb8" name="vlan21-if" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0601] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0604] audit: op="connection-add" uuid="ec1ba5fd-bc71-4544-a1a1-a6126c1edb02" name="vlan22-if" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0635] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0638] audit: op="connection-add" uuid="c98a02e4-2cf8-4ebc-8cf9-de679df99c1c" name="vlan23-if" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0664] audit: op="connection-delete" uuid="e28c0e0c-6ca0-32c5-afa3-1d5d772b4e93" name="Wired connection 1" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0687] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0704] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0712] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a15c4f20-e55d-495f-8cf8-1789ffb767fc)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0713] audit: op="connection-activate" uuid="a15c4f20-e55d-495f-8cf8-1789ffb767fc" name="br-ex-br" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0717] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0730] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0736] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (3501e357-6a24-4589-b09a-4e45df7b9f1e)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0740] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0751] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0758] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (f157efaf-88c1-498e-b7be-9797351e9cc5)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0762] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0775] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0782] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (7943381a-aa3c-4448-9073-3da4ad63fbc2)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0785] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0798] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0805] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2a86e840-97a9-4eb1-a01a-66b8ba93f9a7)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0809] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0819] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0826] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (5a561042-8249-41c1-8751-d90cd23df0d5)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0830] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0840] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0847] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (51befe0a-9ddc-4531-a14f-c5a733b7f996)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0849] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0855] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0858] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0869] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0877] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0884] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (fdd7d824-308c-4d54-bf4e-3d18073b3936)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0886] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0892] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0895] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0898] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0900] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0918] device (eth1): disconnecting for new activation request.
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0919] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0925] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0928] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0932] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0937] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0947] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0954] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8d789710-985c-461b-8da6-c676763589e9)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0955] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0960] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0964] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0967] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0973] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0980] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0987] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c1a6f5fd-fd90-4bc6-a5cb-00fee3cf0eb8)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0988] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0993] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0997] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.0999] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1005] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1015] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1024] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (ec1ba5fd-bc71-4544-a1a1-a6126c1edb02)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1025] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1031] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1035] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1037] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1044] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1052] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1056] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (c98a02e4-2cf8-4ebc-8cf9-de679df99c1c)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1057] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1059] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1061] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1062] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1063] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1074] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51936 uid=0 result="success"
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1075] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1078] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1079] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1086] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1089] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1103] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1109] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 kernel: ovs-system: entered promiscuous mode
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1113] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1123] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 kernel: Timeout policy base is empty
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1132] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1139] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 systemd-udevd[51940]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1143] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1153] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1160] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1167] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1170] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1179] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1189] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1197] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1203] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1227] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1235] dhcp4 (eth0): canceled DHCP transaction
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1237] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1239] dhcp4 (eth0): state changed no lease
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1243] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1262] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.1269] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51936 uid=0 result="fail" reason="Device is not activated"
Dec  4 05:09:57 np0005545273 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 05:09:57 np0005545273 kernel: br-ex: entered promiscuous mode
Dec  4 05:09:57 np0005545273 kernel: vlan20: entered promiscuous mode
Dec  4 05:09:57 np0005545273 systemd-udevd[51941]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.7174] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.7186] dhcp4 (eth0): state changed new lease, address=38.102.83.169
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.7210] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.7220] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.7232] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  4 05:09:57 np0005545273 NetworkManager[49155]: <info>  [1764842997.7241] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  4 05:09:57 np0005545273 kernel: vlan21: entered promiscuous mode
Dec  4 05:09:57 np0005545273 kernel: vlan22: entered promiscuous mode
Dec  4 05:09:57 np0005545273 systemd-udevd[51942]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 05:09:57 np0005545273 kernel: vlan23: entered promiscuous mode
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0830] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0918] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0922] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0923] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0924] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0925] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0926] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0927] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0928] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0931] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0937] device (eth1): disconnecting for new activation request.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0938] audit: op="connection-activate" uuid="92b9209e-aa34-525f-93ad-a8f9725aec53" name="ci-private-network" pid=51936 uid=0 result="success"
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0941] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0962] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0968] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0974] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0976] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0980] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0990] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0994] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.0997] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1000] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1004] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1007] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1010] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1015] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1018] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1021] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1024] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1028] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1031] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1037] device (eth1): Activation: starting connection 'ci-private-network' (92b9209e-aa34-525f-93ad-a8f9725aec53)
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1078] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1082] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1089] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1094] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1116] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1125] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1133] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1154] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1157] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1161] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1169] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1179] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1187] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1197] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1202] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1203] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1205] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1207] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1209] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1216] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1222] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1228] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1235] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1241] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1253] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1258] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1266] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1267] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 05:09:58 np0005545273 NetworkManager[49155]: <info>  [1764842998.1272] device (eth1): Activation: successful, device activated.
Dec  4 05:09:58 np0005545273 python3.9[52285]: ansible-ansible.legacy.async_status Invoked with jid=j993236775427.51930 mode=status _async_dir=/root/.ansible_async
Dec  4 05:09:59 np0005545273 NetworkManager[49155]: <info>  [1764842999.4438] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec  4 05:09:59 np0005545273 ansible-async_wrapper.py[51933]: 51934 still running (300)
Dec  4 05:09:59 np0005545273 NetworkManager[49155]: <info>  [1764842999.6143] checkpoint[0x55fd32550950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  4 05:09:59 np0005545273 NetworkManager[49155]: <info>  [1764842999.6146] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51936 uid=0 result="success"
Dec  4 05:09:59 np0005545273 NetworkManager[49155]: <info>  [1764842999.9389] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec  4 05:09:59 np0005545273 NetworkManager[49155]: <info>  [1764842999.9400] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec  4 05:10:00 np0005545273 NetworkManager[49155]: <info>  [1764843000.8123] audit: op="networking-control" arg="global-dns-configuration" pid=51936 uid=0 result="success"
Dec  4 05:10:01 np0005545273 NetworkManager[49155]: <info>  [1764843001.2269] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  4 05:10:01 np0005545273 NetworkManager[49155]: <info>  [1764843001.2557] audit: op="networking-control" arg="global-dns-configuration" pid=51936 uid=0 result="success"
Dec  4 05:10:01 np0005545273 NetworkManager[49155]: <info>  [1764843001.2584] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec  4 05:10:01 np0005545273 NetworkManager[49155]: <info>  [1764843001.4289] checkpoint[0x55fd32550a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  4 05:10:01 np0005545273 NetworkManager[49155]: <info>  [1764843001.4293] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51936 uid=0 result="success"
Dec  4 05:10:01 np0005545273 ansible-async_wrapper.py[51934]: Module complete (51934)
Dec  4 05:10:02 np0005545273 python3.9[52405]: ansible-ansible.legacy.async_status Invoked with jid=j993236775427.51930 mode=status _async_dir=/root/.ansible_async
Dec  4 05:10:02 np0005545273 python3.9[52504]: ansible-ansible.legacy.async_status Invoked with jid=j993236775427.51930 mode=cleanup _async_dir=/root/.ansible_async
Dec  4 05:10:03 np0005545273 python3.9[52658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:10:03 np0005545273 python3.9[52781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843002.7698042-322-165556734489432/.source.returncode _original_basename=.dlbnlgt7 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:10:04 np0005545273 ansible-async_wrapper.py[51933]: Done in kid B.
Dec  4 05:10:04 np0005545273 python3.9[52933]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:10:04 np0005545273 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 05:10:05 np0005545273 python3.9[53056]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843004.035721-338-193265886416780/.source.cfg _original_basename=.9yo7dvgm follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:10:05 np0005545273 python3.9[53211]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:10:05 np0005545273 systemd[1]: Reloading Network Manager...
Dec  4 05:10:05 np0005545273 NetworkManager[49155]: <info>  [1764843005.9211] audit: op="reload" arg="0" pid=53215 uid=0 result="success"
Dec  4 05:10:05 np0005545273 NetworkManager[49155]: <info>  [1764843005.9221] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  4 05:10:05 np0005545273 systemd[1]: Reloaded Network Manager.
Dec  4 05:10:06 np0005545273 systemd[1]: session-10.scope: Deactivated successfully.
Dec  4 05:10:06 np0005545273 systemd[1]: session-10.scope: Consumed 53.651s CPU time.
Dec  4 05:10:06 np0005545273 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Dec  4 05:10:06 np0005545273 systemd-logind[798]: Removed session 10.
Dec  4 05:10:11 np0005545273 systemd-logind[798]: New session 11 of user zuul.
Dec  4 05:10:11 np0005545273 systemd[1]: Started Session 11 of User zuul.
Dec  4 05:10:12 np0005545273 python3.9[53399]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:10:13 np0005545273 python3.9[53553]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:10:14 np0005545273 python3.9[53747]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:10:15 np0005545273 systemd[1]: session-11.scope: Deactivated successfully.
Dec  4 05:10:15 np0005545273 systemd[1]: session-11.scope: Consumed 2.433s CPU time.
Dec  4 05:10:15 np0005545273 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Dec  4 05:10:15 np0005545273 systemd-logind[798]: Removed session 11.
Dec  4 05:10:15 np0005545273 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 05:10:20 np0005545273 systemd-logind[798]: New session 12 of user zuul.
Dec  4 05:10:20 np0005545273 systemd[1]: Started Session 12 of User zuul.
Dec  4 05:10:21 np0005545273 python3.9[53930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:10:22 np0005545273 python3.9[54084]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:10:23 np0005545273 python3.9[54240]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:10:24 np0005545273 python3.9[54325]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:10:26 np0005545273 python3.9[54478]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:10:27 np0005545273 python3.9[54674]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:10:28 np0005545273 python3.9[54826]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:10:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-compat3173492795-merged.mount: Deactivated successfully.
Dec  4 05:10:28 np0005545273 podman[54827]: 2025-12-04 10:10:28.585666882 +0000 UTC m=+0.054907193 system refresh
Dec  4 05:10:29 np0005545273 python3.9[54991]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:10:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:10:30 np0005545273 python3.9[55114]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843028.7938814-79-159028739508726/.source.json follow=False _original_basename=podman_network_config.j2 checksum=c842a32f0e5aeddf216d0e4b41b36c6a0454f7d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:10:30 np0005545273 python3.9[55266]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:10:31 np0005545273 python3.9[55389]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843030.3612537-94-192471316061870/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e054e42fc917865162376c34713b3d5516074d23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:10:32 np0005545273 python3.9[55541]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:10:33 np0005545273 python3.9[55693]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:10:33 np0005545273 python3.9[55845]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:10:34 np0005545273 python3.9[55997]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:10:35 np0005545273 python3.9[56149]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:10:37 np0005545273 python3.9[56302]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:10:38 np0005545273 python3.9[56456]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:10:39 np0005545273 python3.9[56608]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:10:39 np0005545273 python3.9[56760]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:10:40 np0005545273 python3.9[56913]: ansible-service_facts Invoked
Dec  4 05:10:40 np0005545273 network[56930]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:10:40 np0005545273 network[56931]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:10:40 np0005545273 network[56932]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:10:45 np0005545273 python3.9[57386]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:10:48 np0005545273 python3.9[57541]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  4 05:10:49 np0005545273 python3.9[57693]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:10:49 np0005545273 python3.9[57818]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843048.8365653-238-103495330234340/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:10:50 np0005545273 python3.9[57972]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:10:51 np0005545273 python3.9[58097]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843050.2730312-253-18079187276601/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:10:52 np0005545273 python3.9[58251]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:10:53 np0005545273 python3.9[58405]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:10:54 np0005545273 python3.9[58489]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:10:56 np0005545273 python3.9[58643]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:10:56 np0005545273 python3.9[58727]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:10:56 np0005545273 chronyd[791]: chronyd exiting
Dec  4 05:10:56 np0005545273 systemd[1]: Stopping NTP client/server...
Dec  4 05:10:56 np0005545273 systemd[1]: chronyd.service: Deactivated successfully.
Dec  4 05:10:56 np0005545273 systemd[1]: Stopped NTP client/server.
Dec  4 05:10:56 np0005545273 systemd[1]: Starting NTP client/server...
Dec  4 05:10:56 np0005545273 chronyd[58735]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  4 05:10:56 np0005545273 chronyd[58735]: Frequency -23.612 +/- 0.226 ppm read from /var/lib/chrony/drift
Dec  4 05:10:56 np0005545273 chronyd[58735]: Loaded seccomp filter (level 2)
Dec  4 05:10:56 np0005545273 systemd[1]: Started NTP client/server.
Dec  4 05:10:57 np0005545273 systemd[1]: session-12.scope: Deactivated successfully.
Dec  4 05:10:57 np0005545273 systemd[1]: session-12.scope: Consumed 26.891s CPU time.
Dec  4 05:10:57 np0005545273 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Dec  4 05:10:57 np0005545273 systemd-logind[798]: Removed session 12.
Dec  4 05:11:03 np0005545273 systemd-logind[798]: New session 13 of user zuul.
Dec  4 05:11:03 np0005545273 systemd[1]: Started Session 13 of User zuul.
Dec  4 05:11:03 np0005545273 python3.9[58916]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:04 np0005545273 python3.9[59068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:05 np0005545273 python3.9[59191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843064.0934138-34-200053414231995/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:05 np0005545273 systemd[1]: session-13.scope: Deactivated successfully.
Dec  4 05:11:05 np0005545273 systemd[1]: session-13.scope: Consumed 1.858s CPU time.
Dec  4 05:11:05 np0005545273 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Dec  4 05:11:05 np0005545273 systemd-logind[798]: Removed session 13.
Dec  4 05:11:11 np0005545273 systemd-logind[798]: New session 14 of user zuul.
Dec  4 05:11:11 np0005545273 systemd[1]: Started Session 14 of User zuul.
Dec  4 05:11:12 np0005545273 python3.9[59371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:11:13 np0005545273 python3.9[59527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:14 np0005545273 python3.9[59702]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:15 np0005545273 python3.9[59825]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764843073.536437-41-137303953496244/.source.json _original_basename=.mw3z_fpc follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:15 np0005545273 python3.9[59977]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:16 np0005545273 python3.9[60100]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843075.4072232-64-230044537283121/.source _original_basename=.xjul2cfx follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:17 np0005545273 python3.9[60252]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:11:17 np0005545273 python3.9[60404]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:18 np0005545273 python3.9[60527]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843077.2852218-88-257198964038621/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:11:18 np0005545273 python3.9[60679]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:19 np0005545273 python3.9[60802]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843078.5008197-88-43318723721941/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:11:20 np0005545273 python3.9[60954]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:20 np0005545273 python3.9[61106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:21 np0005545273 python3.9[61229]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843080.480024-125-43068505848326/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:22 np0005545273 python3.9[61381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:22 np0005545273 python3.9[61504]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843081.7465773-140-53644912381580/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:23 np0005545273 python3.9[61656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:11:23 np0005545273 systemd[1]: Reloading.
Dec  4 05:11:24 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:11:24 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:11:24 np0005545273 systemd[1]: Reloading.
Dec  4 05:11:24 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:11:24 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:11:24 np0005545273 systemd[1]: Starting EDPM Container Shutdown...
Dec  4 05:11:24 np0005545273 systemd[1]: Finished EDPM Container Shutdown.
Dec  4 05:11:25 np0005545273 python3.9[61883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:25 np0005545273 python3.9[62006]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843084.7788312-163-191154483260111/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:26 np0005545273 python3.9[62158]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:26 np0005545273 python3.9[62283]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843085.9530258-178-185672400826386/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:27 np0005545273 python3.9[62435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:11:27 np0005545273 systemd[1]: Reloading.
Dec  4 05:11:27 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:11:27 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:11:27 np0005545273 systemd[1]: Reloading.
Dec  4 05:11:27 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:11:28 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:11:28 np0005545273 systemd[1]: Starting Create netns directory...
Dec  4 05:11:28 np0005545273 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  4 05:11:28 np0005545273 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  4 05:11:28 np0005545273 systemd[1]: Finished Create netns directory.
Dec  4 05:11:28 np0005545273 python3.9[62663]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:11:28 np0005545273 network[62680]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:11:28 np0005545273 network[62681]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:11:28 np0005545273 network[62682]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:11:33 np0005545273 python3.9[62944]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:11:33 np0005545273 systemd[1]: Reloading.
Dec  4 05:11:33 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:11:33 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:11:33 np0005545273 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  4 05:11:33 np0005545273 iptables.init[62983]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  4 05:11:34 np0005545273 iptables.init[62983]: iptables: Flushing firewall rules: [  OK  ]
Dec  4 05:11:34 np0005545273 systemd[1]: iptables.service: Deactivated successfully.
Dec  4 05:11:34 np0005545273 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  4 05:11:34 np0005545273 python3.9[63179]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:11:35 np0005545273 python3.9[63333]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:11:35 np0005545273 systemd[1]: Reloading.
Dec  4 05:11:35 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:11:35 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:11:36 np0005545273 systemd[1]: Starting Netfilter Tables...
Dec  4 05:11:36 np0005545273 systemd[1]: Finished Netfilter Tables.
Dec  4 05:11:36 np0005545273 python3.9[63524]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:11:37 np0005545273 python3.9[63677]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:38 np0005545273 python3.9[63802]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843097.3903086-247-100911318175779/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:39 np0005545273 python3.9[63955]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:11:39 np0005545273 systemd[1]: Reloading OpenSSH server daemon...
Dec  4 05:11:39 np0005545273 systemd[1]: Reloaded OpenSSH server daemon.
Dec  4 05:11:40 np0005545273 python3.9[64111]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:40 np0005545273 python3.9[64265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:41 np0005545273 python3.9[64390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843100.1752677-278-139040778712147/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:42 np0005545273 python3.9[64542]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  4 05:11:42 np0005545273 systemd[1]: Starting Time & Date Service...
Dec  4 05:11:42 np0005545273 systemd[1]: Started Time & Date Service.
Dec  4 05:11:44 np0005545273 python3.9[64698]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:44 np0005545273 python3.9[64850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:45 np0005545273 python3.9[64973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843104.3732488-313-233308952657070/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:46 np0005545273 python3.9[65125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:46 np0005545273 python3.9[65248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843105.6710856-328-239693436633773/.source.yaml _original_basename=.37rfxh0_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:47 np0005545273 python3.9[65400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:48 np0005545273 python3.9[65523]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843106.9623334-343-237064595834742/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:48 np0005545273 python3.9[65675]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:11:49 np0005545273 python3.9[65828]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:11:50 np0005545273 python3[65981]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 05:11:51 np0005545273 python3.9[66133]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:51 np0005545273 python3.9[66256]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843110.6663737-382-239936606705847/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:52 np0005545273 python3.9[66408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:52 np0005545273 python3.9[66531]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843111.9639623-397-185575031110288/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:53 np0005545273 python3.9[66683]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:54 np0005545273 python3.9[66806]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843113.185842-412-64939927306543/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:54 np0005545273 python3.9[66958]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:55 np0005545273 python3.9[67081]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843114.4844103-427-33259816022164/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:56 np0005545273 python3.9[67233]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:11:56 np0005545273 python3.9[67356]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843115.7152786-442-227082775057702/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:57 np0005545273 python3.9[67508]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:58 np0005545273 python3.9[67660]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:11:59 np0005545273 python3.9[67819]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:11:59 np0005545273 python3.9[67972]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:12:00 np0005545273 python3.9[68124]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:12:01 np0005545273 python3.9[68276]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  4 05:12:02 np0005545273 python3.9[68429]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  4 05:12:02 np0005545273 systemd[1]: session-14.scope: Deactivated successfully.
Dec  4 05:12:02 np0005545273 systemd[1]: session-14.scope: Consumed 38.089s CPU time.
Dec  4 05:12:02 np0005545273 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Dec  4 05:12:02 np0005545273 systemd-logind[798]: Removed session 14.
Dec  4 05:12:07 np0005545273 systemd-logind[798]: New session 15 of user zuul.
Dec  4 05:12:07 np0005545273 systemd[1]: Started Session 15 of User zuul.
Dec  4 05:12:08 np0005545273 python3.9[68613]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  4 05:12:09 np0005545273 python3.9[68765]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:12:10 np0005545273 python3.9[68917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:12:11 np0005545273 python3.9[69069]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBaDrGsfyH66GeTPneOf4P9cqhJJcxgP3bu0E7RAjEstx4o7NevlnfodrpsWI3GhJ5z8ru5yYrnT8gj6K/RfM5zjWXW+Ul4lDWJ1UnIBsqOM+qHdwpyOanGFwsD1SStOqDLQRPhop1d9LdePkBXvJSXJ80Mpcjwm1bfGwN/fJl8zLFWskfkIYThTGAzthtkHNPXQXTBX+VOKpcthU/qN5CP8Y/w/9w96vwq/0dHExjueOOk28BTWEQCwxPpkb1Wrd6hQ3KYnZye2JOZh3qqNaX44hPg8VLhv3agVerNv6vRiI2EbdHHYD2I5gXfV7bQGhRzhpFEZm2DfYLr5b8H1kG9ocx3KHW2+TctXCO2hCdJhjjuQQb033in90uXPuMsEEvmtCnc5vbJ5DKpgiaJysNZhmTkpKiJ4UVa6HeBh3riio7zeHc3bjI/1AD1cejpy6OEoWwk/X8ydA6bau1ApGvoHoEAXhlES4J/a6CUovnch+uMkircx8hJcYthuNhJIk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBhSkNncUNzxmzyjy22XSoHmC2WfRWk9PEzKRLlibq2#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBeg0yEcOxT9ax0vZC/VGcWoLt2isE/U7UTL1uRpP8q51Um5h2uaP4tcFVGL1g6uXlC20O3SCTRskwpUg5sj6I=#012 create=True mode=0644 path=/tmp/ansible.o_vmo2hl state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:12:12 np0005545273 python3.9[69221]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.o_vmo2hl' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:12:12 np0005545273 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  4 05:12:13 np0005545273 python3.9[69378]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.o_vmo2hl state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:12:13 np0005545273 systemd[1]: session-15.scope: Deactivated successfully.
Dec  4 05:12:13 np0005545273 systemd[1]: session-15.scope: Consumed 3.806s CPU time.
Dec  4 05:12:13 np0005545273 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Dec  4 05:12:13 np0005545273 systemd-logind[798]: Removed session 15.
Dec  4 05:12:19 np0005545273 systemd-logind[798]: New session 16 of user zuul.
Dec  4 05:12:19 np0005545273 systemd[1]: Started Session 16 of User zuul.
Dec  4 05:12:20 np0005545273 python3.9[69559]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:12:21 np0005545273 python3.9[69715]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  4 05:12:22 np0005545273 python3.9[69869]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:12:23 np0005545273 python3.9[70022]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:12:24 np0005545273 python3.9[70175]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:12:25 np0005545273 python3.9[70329]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:12:25 np0005545273 python3.9[70484]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:12:26 np0005545273 systemd[1]: session-16.scope: Deactivated successfully.
Dec  4 05:12:26 np0005545273 systemd[1]: session-16.scope: Consumed 4.704s CPU time.
Dec  4 05:12:26 np0005545273 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Dec  4 05:12:26 np0005545273 systemd-logind[798]: Removed session 16.
Dec  4 05:12:32 np0005545273 systemd-logind[798]: New session 17 of user zuul.
Dec  4 05:12:32 np0005545273 systemd[1]: Started Session 17 of User zuul.
Dec  4 05:12:33 np0005545273 python3.9[70664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:12:34 np0005545273 python3.9[70820]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:12:35 np0005545273 python3.9[70904]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  4 05:12:37 np0005545273 python3.9[71055]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:12:38 np0005545273 python3.9[71206]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 05:12:39 np0005545273 python3.9[71356]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:12:39 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:12:39 np0005545273 python3.9[71507]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:12:40 np0005545273 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Dec  4 05:12:40 np0005545273 systemd[1]: session-17.scope: Deactivated successfully.
Dec  4 05:12:40 np0005545273 systemd[1]: session-17.scope: Consumed 6.057s CPU time.
Dec  4 05:12:40 np0005545273 systemd-logind[798]: Removed session 17.
Dec  4 05:12:49 np0005545273 systemd-logind[798]: New session 18 of user zuul.
Dec  4 05:12:49 np0005545273 systemd[1]: Started Session 18 of User zuul.
Dec  4 05:12:55 np0005545273 python3[72279]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:12:57 np0005545273 python3[72374]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  4 05:12:58 np0005545273 python3[72401]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:12:59 np0005545273 python3[72427]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:12:59 np0005545273 kernel: loop: module loaded
Dec  4 05:12:59 np0005545273 kernel: loop3: detected capacity change from 0 to 41943040
Dec  4 05:12:59 np0005545273 python3[72462]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:12:59 np0005545273 lvm[72465]: PV /dev/loop3 not used.
Dec  4 05:12:59 np0005545273 lvm[72474]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:12:59 np0005545273 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  4 05:13:00 np0005545273 lvm[72476]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  4 05:13:00 np0005545273 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  4 05:13:00 np0005545273 python3[72554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:13:00 np0005545273 python3[72627]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843180.1285412-36129-216918035400012/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:13:01 np0005545273 python3[72677]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:13:01 np0005545273 systemd[1]: Reloading.
Dec  4 05:13:01 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:13:01 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:13:01 np0005545273 systemd[1]: Starting Ceph OSD losetup...
Dec  4 05:13:01 np0005545273 bash[72717]: /dev/loop3: [64513]:4327949 (/var/lib/ceph-osd-0.img)
Dec  4 05:13:01 np0005545273 systemd[1]: Finished Ceph OSD losetup.
Dec  4 05:13:01 np0005545273 lvm[72718]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:13:02 np0005545273 lvm[72718]: VG ceph_vg0 finished
Dec  4 05:13:02 np0005545273 python3[72744]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  4 05:13:04 np0005545273 python3[72771]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:13:04 np0005545273 python3[72797]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:13:04 np0005545273 kernel: loop4: detected capacity change from 0 to 41943040
Dec  4 05:13:04 np0005545273 python3[72829]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:13:04 np0005545273 lvm[72832]: PV /dev/loop4 not used.
Dec  4 05:13:05 np0005545273 lvm[72841]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:13:05 np0005545273 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec  4 05:13:05 np0005545273 lvm[72843]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec  4 05:13:05 np0005545273 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec  4 05:13:05 np0005545273 python3[72922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:13:05 np0005545273 python3[72995]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843185.2448385-36156-259620473456338/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:13:06 np0005545273 python3[73045]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:13:06 np0005545273 systemd[1]: Reloading.
Dec  4 05:13:06 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:13:06 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:13:06 np0005545273 systemd[1]: Starting Ceph OSD losetup...
Dec  4 05:13:06 np0005545273 bash[73088]: /dev/loop4: [64513]:4327955 (/var/lib/ceph-osd-1.img)
Dec  4 05:13:06 np0005545273 systemd[1]: Finished Ceph OSD losetup.
Dec  4 05:13:06 np0005545273 lvm[73089]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:13:06 np0005545273 lvm[73089]: VG ceph_vg1 finished
Dec  4 05:13:06 np0005545273 chronyd[58735]: Selected source 207.34.48.31 (pool.ntp.org)
Dec  4 05:13:07 np0005545273 python3[73115]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  4 05:13:08 np0005545273 python3[73142]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:13:09 np0005545273 python3[73168]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:13:09 np0005545273 kernel: loop5: detected capacity change from 0 to 41943040
Dec  4 05:13:09 np0005545273 python3[73200]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:13:09 np0005545273 lvm[73203]: PV /dev/loop5 not used.
Dec  4 05:13:09 np0005545273 lvm[73205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:13:09 np0005545273 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec  4 05:13:10 np0005545273 lvm[73212]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec  4 05:13:10 np0005545273 lvm[73216]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:13:10 np0005545273 lvm[73216]: VG ceph_vg2 finished
Dec  4 05:13:10 np0005545273 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec  4 05:13:10 np0005545273 python3[73295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:13:10 np0005545273 python3[73368]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843190.2001336-36183-43864294763365/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:13:11 np0005545273 python3[73418]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:13:11 np0005545273 systemd[1]: Reloading.
Dec  4 05:13:11 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:13:11 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:13:12 np0005545273 systemd[1]: Starting Ceph OSD losetup...
Dec  4 05:13:12 np0005545273 bash[73458]: /dev/loop5: [64513]:4327958 (/var/lib/ceph-osd-2.img)
Dec  4 05:13:12 np0005545273 systemd[1]: Finished Ceph OSD losetup.
Dec  4 05:13:12 np0005545273 lvm[73459]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:13:12 np0005545273 lvm[73459]: VG ceph_vg2 finished
Dec  4 05:13:14 np0005545273 python3[73483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:13:16 np0005545273 python3[73578]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  4 05:13:19 np0005545273 python3[73635]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  4 05:13:23 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:13:23 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:13:23 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:13:23 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:13:23 np0005545273 systemd[1]: run-r47ff6ba7ca8442d3bf258dbb2dbe7cfe.service: Deactivated successfully.
Dec  4 05:13:23 np0005545273 python3[73756]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:13:24 np0005545273 python3[73784]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:13:24 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:13:25 np0005545273 python3[73823]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:13:25 np0005545273 python3[73849]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:13:26 np0005545273 python3[73927]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:13:26 np0005545273 python3[74000]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843205.8727083-36331-233421612422645/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:13:27 np0005545273 python3[74102]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:13:27 np0005545273 python3[74175]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843207.1810696-36349-10698057980318/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:13:28 np0005545273 python3[74225]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:13:28 np0005545273 python3[74253]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:13:29 np0005545273 python3[74281]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:13:29 np0005545273 python3[74309]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:13:29 np0005545273 systemd[1]: Created slice User Slice of UID 42477.
Dec  4 05:13:29 np0005545273 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  4 05:13:29 np0005545273 systemd-logind[798]: New session 19 of user ceph-admin.
Dec  4 05:13:29 np0005545273 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  4 05:13:29 np0005545273 systemd[1]: Starting User Manager for UID 42477...
Dec  4 05:13:29 np0005545273 systemd[74317]: Queued start job for default target Main User Target.
Dec  4 05:13:29 np0005545273 systemd[74317]: Created slice User Application Slice.
Dec  4 05:13:29 np0005545273 systemd[74317]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  4 05:13:29 np0005545273 systemd[74317]: Started Daily Cleanup of User's Temporary Directories.
Dec  4 05:13:29 np0005545273 systemd[74317]: Reached target Paths.
Dec  4 05:13:29 np0005545273 systemd[74317]: Reached target Timers.
Dec  4 05:13:29 np0005545273 systemd[74317]: Starting D-Bus User Message Bus Socket...
Dec  4 05:13:29 np0005545273 systemd[74317]: Starting Create User's Volatile Files and Directories...
Dec  4 05:13:29 np0005545273 systemd[74317]: Finished Create User's Volatile Files and Directories.
Dec  4 05:13:29 np0005545273 systemd[74317]: Listening on D-Bus User Message Bus Socket.
Dec  4 05:13:29 np0005545273 systemd[74317]: Reached target Sockets.
Dec  4 05:13:29 np0005545273 systemd[74317]: Reached target Basic System.
Dec  4 05:13:29 np0005545273 systemd[74317]: Reached target Main User Target.
Dec  4 05:13:29 np0005545273 systemd[74317]: Startup finished in 129ms.
Dec  4 05:13:29 np0005545273 systemd[1]: Started User Manager for UID 42477.
Dec  4 05:13:29 np0005545273 systemd[1]: Started Session 19 of User ceph-admin.
Dec  4 05:13:29 np0005545273 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Dec  4 05:13:29 np0005545273 systemd[1]: session-19.scope: Deactivated successfully.
Dec  4 05:13:29 np0005545273 systemd-logind[798]: Removed session 19.
Dec  4 05:13:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:13:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:13:34 np0005545273 systemd[1]: var-lib-containers-storage-overlay-compat1353328872-lower\x2dmapped.mount: Deactivated successfully.
Dec  4 05:13:40 np0005545273 systemd[1]: Stopping User Manager for UID 42477...
Dec  4 05:13:40 np0005545273 systemd[74317]: Activating special unit Exit the Session...
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped target Main User Target.
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped target Basic System.
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped target Paths.
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped target Sockets.
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped target Timers.
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  4 05:13:40 np0005545273 systemd[74317]: Closed D-Bus User Message Bus Socket.
Dec  4 05:13:40 np0005545273 systemd[74317]: Stopped Create User's Volatile Files and Directories.
Dec  4 05:13:40 np0005545273 systemd[74317]: Removed slice User Application Slice.
Dec  4 05:13:40 np0005545273 systemd[74317]: Reached target Shutdown.
Dec  4 05:13:40 np0005545273 systemd[74317]: Finished Exit the Session.
Dec  4 05:13:40 np0005545273 systemd[74317]: Reached target Exit the Session.
Dec  4 05:13:40 np0005545273 systemd[1]: user@42477.service: Deactivated successfully.
Dec  4 05:13:40 np0005545273 systemd[1]: Stopped User Manager for UID 42477.
Dec  4 05:13:40 np0005545273 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  4 05:13:40 np0005545273 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  4 05:13:40 np0005545273 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  4 05:13:40 np0005545273 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  4 05:13:40 np0005545273 systemd[1]: Removed slice User Slice of UID 42477.
Dec  4 05:13:59 np0005545273 podman[74410]: 2025-12-04 10:13:59.917654865 +0000 UTC m=+29.666856772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:13:59 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:13:59 np0005545273 podman[74479]: 2025-12-04 10:13:59.985148747 +0000 UTC m=+0.037565295 container create 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:00 np0005545273 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  4 05:14:00 np0005545273 systemd[1]: Started libpod-conmon-96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa.scope.
Dec  4 05:14:00 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:00 np0005545273 podman[74479]: 2025-12-04 10:13:59.966555925 +0000 UTC m=+0.018972393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:00 np0005545273 podman[74479]: 2025-12-04 10:14:00.068955576 +0000 UTC m=+0.121372044 container init 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:14:00 np0005545273 podman[74479]: 2025-12-04 10:14:00.074852949 +0000 UTC m=+0.127269387 container start 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:00 np0005545273 podman[74479]: 2025-12-04 10:14:00.079964594 +0000 UTC m=+0.132381042 container attach 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:00 np0005545273 hungry_lovelace[74495]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74479]: 2025-12-04 10:14:00.1805037 +0000 UTC m=+0.232920188 container died 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:14:00 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a183c112e72f6f87f851015dae2b418fb21a32da2d1fff53ef4706adaf788c5c-merged.mount: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74479]: 2025-12-04 10:14:00.221814814 +0000 UTC m=+0.274231262 container remove 96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa (image=quay.io/ceph/ceph:v20, name=hungry_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-conmon-96f0a064e494e6d0b9dcd6fb4e4768e7d8ff03ebe736564172bf943663d283aa.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74511]: 2025-12-04 10:14:00.284161261 +0000 UTC m=+0.043184222 container create 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:14:00 np0005545273 systemd[1]: Started libpod-conmon-3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5.scope.
Dec  4 05:14:00 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:00 np0005545273 podman[74511]: 2025-12-04 10:14:00.343852792 +0000 UTC m=+0.102875733 container init 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:00 np0005545273 podman[74511]: 2025-12-04 10:14:00.349158362 +0000 UTC m=+0.108181273 container start 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:00 np0005545273 frosty_carson[74528]: 167 167
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74511]: 2025-12-04 10:14:00.352996805 +0000 UTC m=+0.112019736 container attach 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:14:00 np0005545273 podman[74511]: 2025-12-04 10:14:00.35360258 +0000 UTC m=+0.112625501 container died 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:00 np0005545273 podman[74511]: 2025-12-04 10:14:00.26601891 +0000 UTC m=+0.025041851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:00 np0005545273 podman[74511]: 2025-12-04 10:14:00.382954614 +0000 UTC m=+0.141977545 container remove 3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5 (image=quay.io/ceph/ceph:v20, name=frosty_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-conmon-3deccfb2b783c2a6e3d1ad4439ecca81638256deab42ee9763e809e9c0dc77d5.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74544]: 2025-12-04 10:14:00.445432614 +0000 UTC m=+0.040642610 container create f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:00 np0005545273 systemd[1]: Started libpod-conmon-f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1.scope.
Dec  4 05:14:00 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:00 np0005545273 podman[74544]: 2025-12-04 10:14:00.502971803 +0000 UTC m=+0.098181819 container init f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:14:00 np0005545273 podman[74544]: 2025-12-04 10:14:00.50734018 +0000 UTC m=+0.102550176 container start f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  4 05:14:00 np0005545273 podman[74544]: 2025-12-04 10:14:00.510385473 +0000 UTC m=+0.105595519 container attach f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:00 np0005545273 dreamy_leavitt[74560]: AQDoXjFpudVHHxAAseK2Wow9iO9o3+Ir2a2qrw==
Dec  4 05:14:00 np0005545273 podman[74544]: 2025-12-04 10:14:00.429519086 +0000 UTC m=+0.024729092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74544]: 2025-12-04 10:14:00.528072224 +0000 UTC m=+0.123282230 container died f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:14:00 np0005545273 podman[74544]: 2025-12-04 10:14:00.56817168 +0000 UTC m=+0.163381676 container remove f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1 (image=quay.io/ceph/ceph:v20, name=dreamy_leavitt, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-conmon-f3cc10569caefa5a68f5ef9ac4bb8c26864068b8fb45a451f8e3b9ce207a59e1.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74579]: 2025-12-04 10:14:00.641730419 +0000 UTC m=+0.056138667 container create 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:00 np0005545273 systemd[1]: Started libpod-conmon-9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932.scope.
Dec  4 05:14:00 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:00 np0005545273 podman[74579]: 2025-12-04 10:14:00.697454764 +0000 UTC m=+0.111863082 container init 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:00 np0005545273 podman[74579]: 2025-12-04 10:14:00.610793746 +0000 UTC m=+0.025202084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:00 np0005545273 podman[74579]: 2025-12-04 10:14:00.703600383 +0000 UTC m=+0.118008661 container start 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:00 np0005545273 podman[74579]: 2025-12-04 10:14:00.708301938 +0000 UTC m=+0.122710226 container attach 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:00 np0005545273 priceless_easley[74595]: AQDoXjFpzJYdLBAA9cSiudMhfVlhS9w0KRsFdw==
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74579]: 2025-12-04 10:14:00.746545838 +0000 UTC m=+0.160954156 container died 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:14:00 np0005545273 podman[74579]: 2025-12-04 10:14:00.789817641 +0000 UTC m=+0.204225929 container remove 9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932 (image=quay.io/ceph/ceph:v20, name=priceless_easley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-conmon-9e5e88c53412cea922cd9e1dc686e570acef4e3d39cb774f2348b8d4ef873932.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74615]: 2025-12-04 10:14:00.871367625 +0000 UTC m=+0.055954332 container create 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:00 np0005545273 systemd[1]: Started libpod-conmon-3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6.scope.
Dec  4 05:14:00 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:00 np0005545273 podman[74615]: 2025-12-04 10:14:00.845584028 +0000 UTC m=+0.030170795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:00 np0005545273 podman[74615]: 2025-12-04 10:14:00.949808993 +0000 UTC m=+0.134395740 container init 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:14:00 np0005545273 podman[74615]: 2025-12-04 10:14:00.958309149 +0000 UTC m=+0.142895866 container start 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:14:00 np0005545273 podman[74615]: 2025-12-04 10:14:00.962807959 +0000 UTC m=+0.147394726 container attach 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:00 np0005545273 jolly_burnell[74632]: AQDoXjFp1z2GOhAA+dIeJ0tGhdDx9kUod6sTpQ==
Dec  4 05:14:00 np0005545273 systemd[1]: libpod-3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6.scope: Deactivated successfully.
Dec  4 05:14:00 np0005545273 podman[74615]: 2025-12-04 10:14:00.987981991 +0000 UTC m=+0.172568698 container died 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:14:01 np0005545273 systemd[1]: var-lib-containers-storage-overlay-205bef148c779b469b9afd8665e028752dde0f9e4d61ca282a7f3efe0406dec0-merged.mount: Deactivated successfully.
Dec  4 05:14:01 np0005545273 podman[74615]: 2025-12-04 10:14:01.037142907 +0000 UTC m=+0.221729614 container remove 3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6 (image=quay.io/ceph/ceph:v20, name=jolly_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:14:01 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:14:01 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:14:01 np0005545273 systemd[1]: libpod-conmon-3795ca4fe5b35eecae3420bc0f7b70027c5636624184229736a9345d390d5ac6.scope: Deactivated successfully.
Dec  4 05:14:01 np0005545273 podman[74650]: 2025-12-04 10:14:01.135521871 +0000 UTC m=+0.064635613 container create 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Dec  4 05:14:01 np0005545273 systemd[1]: Started libpod-conmon-2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288.scope.
Dec  4 05:14:01 np0005545273 podman[74650]: 2025-12-04 10:14:01.108338829 +0000 UTC m=+0.037452611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:01 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbbfb0286447243a590856cd11b35f9795603c79d38db212ade8df19b4f486/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:01 np0005545273 podman[74650]: 2025-12-04 10:14:01.238638748 +0000 UTC m=+0.167752550 container init 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  4 05:14:01 np0005545273 podman[74650]: 2025-12-04 10:14:01.247459864 +0000 UTC m=+0.176573616 container start 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:14:01 np0005545273 podman[74650]: 2025-12-04 10:14:01.251320997 +0000 UTC m=+0.180434789 container attach 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:14:01 np0005545273 confident_fermi[74666]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  4 05:14:01 np0005545273 confident_fermi[74666]: setting min_mon_release = tentacle
Dec  4 05:14:01 np0005545273 confident_fermi[74666]: /usr/bin/monmaptool: set fsid to f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:01 np0005545273 confident_fermi[74666]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  4 05:14:01 np0005545273 systemd[1]: libpod-2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288.scope: Deactivated successfully.
Dec  4 05:14:01 np0005545273 podman[74650]: 2025-12-04 10:14:01.301042727 +0000 UTC m=+0.230156469 container died 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:14:01 np0005545273 podman[74650]: 2025-12-04 10:14:01.349324601 +0000 UTC m=+0.278438323 container remove 2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288 (image=quay.io/ceph/ceph:v20, name=confident_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:01 np0005545273 systemd[1]: libpod-conmon-2db18df8898193d9ebbafbe71d5fb8cb3a974ed46ab0e06f7bb859645bf6b288.scope: Deactivated successfully.
Dec  4 05:14:01 np0005545273 podman[74686]: 2025-12-04 10:14:01.428958238 +0000 UTC m=+0.052084558 container create 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:01 np0005545273 systemd[1]: Started libpod-conmon-73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f.scope.
Dec  4 05:14:01 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90ccba778261ec811496283ab7e90c6244fe93138067f0c1f084607a81d7d4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:01 np0005545273 podman[74686]: 2025-12-04 10:14:01.404504634 +0000 UTC m=+0.027630994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:01 np0005545273 podman[74686]: 2025-12-04 10:14:01.503148133 +0000 UTC m=+0.126274473 container init 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:14:01 np0005545273 podman[74686]: 2025-12-04 10:14:01.507961671 +0000 UTC m=+0.131088001 container start 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:14:01 np0005545273 podman[74686]: 2025-12-04 10:14:01.512494411 +0000 UTC m=+0.135620771 container attach 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:01 np0005545273 systemd[1]: libpod-73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f.scope: Deactivated successfully.
Dec  4 05:14:01 np0005545273 podman[74686]: 2025-12-04 10:14:01.609234294 +0000 UTC m=+0.232360624 container died 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:01 np0005545273 podman[74686]: 2025-12-04 10:14:01.645196489 +0000 UTC m=+0.268322829 container remove 73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f (image=quay.io/ceph/ceph:v20, name=hopeful_ride, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:14:01 np0005545273 systemd[1]: libpod-conmon-73ae8e988b28d85343c6e1e31580ba41c9f7895b7d5fbef3ad93fadc1772f88f.scope: Deactivated successfully.
Dec  4 05:14:01 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:01 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:01 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:01 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:02 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:02 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:02 np0005545273 systemd[1]: Reached target All Ceph clusters and services.
Dec  4 05:14:02 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:02 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:02 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:02 np0005545273 systemd[1]: Reached target Ceph cluster f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:02 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:02 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:02 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:02 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:02 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:02 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:03 np0005545273 systemd[1]: Created slice Slice /system/ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:03 np0005545273 systemd[1]: Reached target System Time Set.
Dec  4 05:14:03 np0005545273 systemd[1]: Reached target System Time Synchronized.
Dec  4 05:14:03 np0005545273 systemd[1]: Starting Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:14:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:14:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:14:03 np0005545273 podman[74983]: 2025-12-04 10:14:03.357208104 +0000 UTC m=+0.057066609 container create d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:14:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:03 np0005545273 podman[74983]: 2025-12-04 10:14:03.419500139 +0000 UTC m=+0.119358654 container init d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:03 np0005545273 podman[74983]: 2025-12-04 10:14:03.332245147 +0000 UTC m=+0.032103732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:03 np0005545273 podman[74983]: 2025-12-04 10:14:03.428682163 +0000 UTC m=+0.128540658 container start d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:14:03 np0005545273 bash[74983]: d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993
Dec  4 05:14:03 np0005545273 systemd[1]: Started Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: pidfile_write: ignore empty --pid-file
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: load: jerasure load: lrc 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Git sha 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: DB SUMMARY
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: DB Session ID:  7WT4DFD6J7L4496MS03O
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                                     Options.env: 0x55a31fe52440
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                                Options.info_log: 0x55a320db93e0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                                 Options.wal_dir: 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                    Options.write_buffer_manager: 0x55a320d38140
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                               Options.row_cache: None
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                              Options.wal_filter: None
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.wal_compression: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.max_background_jobs: 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.max_total_wal_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:       Options.compaction_readahead_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Compression algorithms supported:
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kZSTD supported: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kXpressCompression supported: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kBZip2Compression supported: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kLZ4Compression supported: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kZlibCompression supported: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: #011kSnappyCompression supported: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:           Options.merge_operator: 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:        Options.compaction_filter: None
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a320d44600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a320d298d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:        Options.write_buffer_size: 33554432
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:  Options.max_write_buffer_number: 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:          Options.compression: NoCompression
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.num_levels: 7
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bea4932-39ce-4c6c-8b9b-253595ae5108
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843243486470, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843243489570, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "7WT4DFD6J7L4496MS03O", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843243489694, "job": 1, "event": "recovery_finished"}
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a320d56e00
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: DB pointer 0x55a320ea2000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a320d298d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@-1(???) e0 preinit fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  4 05:14:03 np0005545273 podman[75004]: 2025-12-04 10:14:03.523212072 +0000 UTC m=+0.051948694 container create 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : last_changed 2025-12-04T10:14:01.294217+0000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : created 2025-12-04T10:14:01.294217+0000
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2025-12-04T10:14:01.553789Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).mds e1 new map
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2025-12-04T10:14:03:532003+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : fsmap 
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mkfs f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  4 05:14:03 np0005545273 systemd[1]: Started libpod-conmon-4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d.scope.
Dec  4 05:14:03 np0005545273 podman[75004]: 2025-12-04 10:14:03.503226646 +0000 UTC m=+0.031963288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:03 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848b28f9c824aed493a13559723d96d6cbe9183ff82192101644a5799620969f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848b28f9c824aed493a13559723d96d6cbe9183ff82192101644a5799620969f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848b28f9c824aed493a13559723d96d6cbe9183ff82192101644a5799620969f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:03 np0005545273 podman[75004]: 2025-12-04 10:14:03.641580462 +0000 UTC m=+0.170317164 container init 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:14:03 np0005545273 podman[75004]: 2025-12-04 10:14:03.652416445 +0000 UTC m=+0.181153097 container start 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:14:03 np0005545273 podman[75004]: 2025-12-04 10:14:03.656129856 +0000 UTC m=+0.184866498 container attach 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  4 05:14:03 np0005545273 ceph-mon[75003]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692268427' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec  4 05:14:03 np0005545273 modest_payne[75058]:  cluster:
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    id:     f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    health: HEALTH_OK
Dec  4 05:14:03 np0005545273 modest_payne[75058]: 
Dec  4 05:14:03 np0005545273 modest_payne[75058]:  services:
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    mon: 1 daemons, quorum compute-0 (age 0.311425s) [leader: compute-0]
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    mgr: no daemons active
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    osd: 0 osds: 0 up, 0 in
Dec  4 05:14:03 np0005545273 modest_payne[75058]: 
Dec  4 05:14:03 np0005545273 modest_payne[75058]:  data:
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    pools:   0 pools, 0 pgs
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    objects: 0 objects, 0 B
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    usage:   0 B used, 0 B / 0 B avail
Dec  4 05:14:03 np0005545273 modest_payne[75058]:    pgs:     
Dec  4 05:14:03 np0005545273 modest_payne[75058]: 
Dec  4 05:14:03 np0005545273 systemd[1]: libpod-4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d.scope: Deactivated successfully.
Dec  4 05:14:03 np0005545273 podman[75004]: 2025-12-04 10:14:03.858347154 +0000 UTC m=+0.387083816 container died 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:14:03 np0005545273 podman[75004]: 2025-12-04 10:14:03.896226356 +0000 UTC m=+0.424962968 container remove 4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d (image=quay.io/ceph/ceph:v20, name=modest_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:03 np0005545273 systemd[1]: libpod-conmon-4ebc742aac73794b59cc291d5ee87157a672fcf58ef2073347832f2c79650d8d.scope: Deactivated successfully.
Dec  4 05:14:03 np0005545273 podman[75095]: 2025-12-04 10:14:03.966091655 +0000 UTC m=+0.046597724 container create 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:04 np0005545273 systemd[1]: Started libpod-conmon-7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07.scope.
Dec  4 05:14:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 podman[75095]: 2025-12-04 10:14:03.941641891 +0000 UTC m=+0.022147950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:04 np0005545273 podman[75095]: 2025-12-04 10:14:04.05015015 +0000 UTC m=+0.130656229 container init 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:14:04 np0005545273 podman[75095]: 2025-12-04 10:14:04.062208644 +0000 UTC m=+0.142714683 container start 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:14:04 np0005545273 podman[75095]: 2025-12-04 10:14:04.066434366 +0000 UTC m=+0.146940435 container attach 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  4 05:14:04 np0005545273 silly_lewin[75112]: 
Dec  4 05:14:04 np0005545273 silly_lewin[75112]: [global]
Dec  4 05:14:04 np0005545273 silly_lewin[75112]: #011fsid = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:04 np0005545273 silly_lewin[75112]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  4 05:14:04 np0005545273 silly_lewin[75112]: #011osd_crush_chooseleaf_type = 0
Dec  4 05:14:04 np0005545273 systemd[1]: libpod-7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07.scope: Deactivated successfully.
Dec  4 05:14:04 np0005545273 podman[75095]: 2025-12-04 10:14:04.274762744 +0000 UTC m=+0.355268793 container died 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:14:04 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a37d4a989507d8dc51eb65610203af352c2467738937a8065bbf9335ad37b6bd-merged.mount: Deactivated successfully.
Dec  4 05:14:04 np0005545273 podman[75095]: 2025-12-04 10:14:04.31075138 +0000 UTC m=+0.391257419 container remove 7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07 (image=quay.io/ceph/ceph:v20, name=silly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:14:04 np0005545273 systemd[1]: libpod-conmon-7ee62c673477522dcda616ddb61a0f8ec3a1b8c2bd9177684970b4e74d413f07.scope: Deactivated successfully.
Dec  4 05:14:04 np0005545273 podman[75149]: 2025-12-04 10:14:04.366954837 +0000 UTC m=+0.038284073 container create 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 05:14:04 np0005545273 systemd[1]: Started libpod-conmon-2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23.scope.
Dec  4 05:14:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:04 np0005545273 podman[75149]: 2025-12-04 10:14:04.350049366 +0000 UTC m=+0.021378662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:04 np0005545273 podman[75149]: 2025-12-04 10:14:04.454407974 +0000 UTC m=+0.125737250 container init 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:14:04 np0005545273 podman[75149]: 2025-12-04 10:14:04.461873935 +0000 UTC m=+0.133203191 container start 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:14:04 np0005545273 podman[75149]: 2025-12-04 10:14:04.465490264 +0000 UTC m=+0.136819540 container attach 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: from='client.? 192.168.122.100:0/1561490116' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:04 np0005545273 ceph-mon[75003]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3393911737' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:04 np0005545273 systemd[1]: libpod-2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23.scope: Deactivated successfully.
Dec  4 05:14:04 np0005545273 podman[75149]: 2025-12-04 10:14:04.670308706 +0000 UTC m=+0.341637972 container died 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:04 np0005545273 systemd[1]: var-lib-containers-storage-overlay-31e5fd1e4e6dfa9123edd46814333058ddf64b3621f930ae824c7115ad9c0cd5-merged.mount: Deactivated successfully.
Dec  4 05:14:04 np0005545273 podman[75149]: 2025-12-04 10:14:04.942247562 +0000 UTC m=+0.613576838 container remove 2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23 (image=quay.io/ceph/ceph:v20, name=modest_yalow, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 05:14:04 np0005545273 systemd[1]: Stopping Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:14:05 np0005545273 systemd[1]: libpod-conmon-2ecb3831394af0b24f6c3da0d136b83d8b754b07298dba6caed99bcd87d3fd23.scope: Deactivated successfully.
Dec  4 05:14:05 np0005545273 ceph-mon[75003]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  4 05:14:05 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  4 05:14:05 np0005545273 ceph-mon[75003]: mon.compute-0@0(leader) e1 shutdown
Dec  4 05:14:05 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0[74999]: 2025-12-04T10:14:05.226+0000 7f8431168640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  4 05:14:05 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0[74999]: 2025-12-04T10:14:05.226+0000 7f8431168640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  4 05:14:05 np0005545273 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  4 05:14:05 np0005545273 ceph-mon[75003]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  4 05:14:05 np0005545273 podman[75234]: 2025-12-04 10:14:05.310168502 +0000 UTC m=+0.137919257 container died d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:05 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a186da268ee28948e4ecbb6a0d1a8bacd43222e0fe25b37c3d0105318d31b593-merged.mount: Deactivated successfully.
Dec  4 05:14:05 np0005545273 podman[75234]: 2025-12-04 10:14:05.476483007 +0000 UTC m=+0.304233772 container remove d32677119db7471630ed10d34a82476d263c78d0396f8f37dbe667237f467993 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:05 np0005545273 bash[75234]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0
Dec  4 05:14:05 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:14:05 np0005545273 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 05:14:05 np0005545273 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mon.compute-0.service: Deactivated successfully.
Dec  4 05:14:05 np0005545273 systemd[1]: Stopped Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:05 np0005545273 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mon.compute-0.service: Consumed 1.113s CPU time.
Dec  4 05:14:05 np0005545273 systemd[1]: Starting Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:14:05 np0005545273 podman[75338]: 2025-12-04 10:14:05.863185673 +0000 UTC m=+0.026869684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:06 np0005545273 podman[75338]: 2025-12-04 10:14:06.178159007 +0000 UTC m=+0.341842968 container create 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6b0bbf900070899083a15aeddc410b61723de45c5b8ba83bd59565f9d3ea1f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:06 np0005545273 podman[75338]: 2025-12-04 10:14:06.357745946 +0000 UTC m=+0.521429957 container init 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:14:06 np0005545273 podman[75338]: 2025-12-04 10:14:06.368766473 +0000 UTC m=+0.532450434 container start 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:14:06 np0005545273 bash[75338]: 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88
Dec  4 05:14:06 np0005545273 systemd[1]: Started Ceph mon.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: pidfile_write: ignore empty --pid-file
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: load: jerasure load: lrc 
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Git sha 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: DB SUMMARY
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: DB Session ID:  Y30CWPND84TKXOFWI6NG
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                                     Options.env: 0x56349edf6440
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                                Options.info_log: 0x56349f85fe80
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                                 Options.wal_dir: 
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                    Options.write_buffer_manager: 0x56349f8aa140
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                               Options.row_cache: None
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                              Options.wal_filter: None
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.wal_compression: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.max_background_jobs: 2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.max_total_wal_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:       Options.compaction_readahead_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Compression algorithms supported:
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kZSTD supported: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kXpressCompression supported: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kBZip2Compression supported: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kLZ4Compression supported: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kZlibCompression supported: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: #011kSnappyCompression supported: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:           Options.merge_operator: 
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:        Options.compaction_filter: None
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56349f8b6a00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56349f89b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:        Options.write_buffer_size: 33554432
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:  Options.max_write_buffer_number: 2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:          Options.compression: NoCompression
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.num_levels: 7
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bea4932-39ce-4c6c-8b9b-253595ae5108
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843246409136, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843246413387, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843246, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843246413483, "job": 1, "event": "recovery_finished"}
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56349f8c8e00
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: DB pointer 0x56349fa12000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???) e1 preinit fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???).mds e1 new map
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-12-04T10:14:03:532003+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : last_changed 2025-12-04T10:14:01.294217+0000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : created 2025-12-04T10:14:01.294217+0000
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap 
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  4 05:14:06 np0005545273 podman[75359]: 2025-12-04 10:14:06.486761843 +0000 UTC m=+0.059559859 container create e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  4 05:14:06 np0005545273 systemd[1]: Started libpod-conmon-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope.
Dec  4 05:14:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:06 np0005545273 podman[75359]: 2025-12-04 10:14:06.472509646 +0000 UTC m=+0.045307682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:06 np0005545273 podman[75359]: 2025-12-04 10:14:06.567557098 +0000 UTC m=+0.140355174 container init e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:06 np0005545273 podman[75359]: 2025-12-04 10:14:06.579698203 +0000 UTC m=+0.152496219 container start e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:06 np0005545273 podman[75359]: 2025-12-04 10:14:06.586060299 +0000 UTC m=+0.158858345 container attach e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec  4 05:14:06 np0005545273 systemd[1]: libpod-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope: Deactivated successfully.
Dec  4 05:14:06 np0005545273 conmon[75413]: conmon e9d869578e7c95c57feb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope/container/memory.events
Dec  4 05:14:06 np0005545273 podman[75359]: 2025-12-04 10:14:06.788444591 +0000 UTC m=+0.361242627 container died e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:06 np0005545273 systemd[1]: var-lib-containers-storage-overlay-490e23c5b636d3bfadde5f7ba4251a47dbb32e676c599743124f91b0c040f38b-merged.mount: Deactivated successfully.
Dec  4 05:14:06 np0005545273 podman[75359]: 2025-12-04 10:14:06.846163655 +0000 UTC m=+0.418961691 container remove e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df (image=quay.io/ceph/ceph:v20, name=gallant_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:14:06 np0005545273 systemd[1]: libpod-conmon-e9d869578e7c95c57feb2dae55a8cc0509c8e4b3131230aef83d297ba7c0e8df.scope: Deactivated successfully.
Dec  4 05:14:06 np0005545273 podman[75451]: 2025-12-04 10:14:06.928539149 +0000 UTC m=+0.059797726 container create e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:06 np0005545273 systemd[1]: Started libpod-conmon-e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d.scope.
Dec  4 05:14:06 np0005545273 podman[75451]: 2025-12-04 10:14:06.895197398 +0000 UTC m=+0.026456055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:07 np0005545273 podman[75451]: 2025-12-04 10:14:07.037729955 +0000 UTC m=+0.168988582 container init e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:07 np0005545273 podman[75451]: 2025-12-04 10:14:07.049300316 +0000 UTC m=+0.180558863 container start e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  4 05:14:07 np0005545273 podman[75451]: 2025-12-04 10:14:07.05272984 +0000 UTC m=+0.183988387 container attach e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec  4 05:14:07 np0005545273 systemd[1]: libpod-e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d.scope: Deactivated successfully.
Dec  4 05:14:07 np0005545273 podman[75451]: 2025-12-04 10:14:07.320436762 +0000 UTC m=+0.451695369 container died e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:14:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bcc49d7e7ee9d61f7d2ed2bfa753d593abc941fa607c48f013ffa1d10faf9a4b-merged.mount: Deactivated successfully.
Dec  4 05:14:07 np0005545273 podman[75451]: 2025-12-04 10:14:07.367080706 +0000 UTC m=+0.498339253 container remove e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d (image=quay.io/ceph/ceph:v20, name=vibrant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:14:07 np0005545273 systemd[1]: libpod-conmon-e6e8d18e722df22a25694496b029bdacdff5cb5875682c62ca2ce19bfa40f51d.scope: Deactivated successfully.
Dec  4 05:14:07 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:07 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:07 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:08 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:08 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:08 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:08 np0005545273 systemd[1]: Starting Ceph mgr.compute-0.iwufnj for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:14:08 np0005545273 podman[75631]: 2025-12-04 10:14:08.707275798 +0000 UTC m=+0.074568915 container create aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fb9ea260250f31b964bda5ea2a93d990d10583b09e1c9b2e05d713b716db8f/merged/var/lib/ceph/mgr/ceph-compute-0.iwufnj supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:08 np0005545273 podman[75631]: 2025-12-04 10:14:08.676679734 +0000 UTC m=+0.043972901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:08 np0005545273 podman[75631]: 2025-12-04 10:14:08.793598808 +0000 UTC m=+0.160891945 container init aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:08 np0005545273 podman[75631]: 2025-12-04 10:14:08.811333219 +0000 UTC m=+0.178626296 container start aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:14:08 np0005545273 bash[75631]: aa9fc7b1d662f69b2a978cfdf463b7d7981b2b6c84d1dea291388aff96f8a8ca
Dec  4 05:14:08 np0005545273 systemd[1]: Started Ceph mgr.compute-0.iwufnj for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:08 np0005545273 ceph-mgr[75651]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:14:08 np0005545273 ceph-mgr[75651]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec  4 05:14:08 np0005545273 ceph-mgr[75651]: pidfile_write: ignore empty --pid-file
Dec  4 05:14:08 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'alerts'
Dec  4 05:14:08 np0005545273 podman[75652]: 2025-12-04 10:14:08.949144131 +0000 UTC m=+0.073449947 container create 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:08 np0005545273 systemd[1]: Started libpod-conmon-21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24.scope.
Dec  4 05:14:09 np0005545273 podman[75652]: 2025-12-04 10:14:08.919561462 +0000 UTC m=+0.043867348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:09 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'balancer'
Dec  4 05:14:09 np0005545273 podman[75652]: 2025-12-04 10:14:09.055068628 +0000 UTC m=+0.179374454 container init 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:09 np0005545273 podman[75652]: 2025-12-04 10:14:09.068743091 +0000 UTC m=+0.193048877 container start 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:09 np0005545273 podman[75652]: 2025-12-04 10:14:09.072695327 +0000 UTC m=+0.197001163 container attach 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:09 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'cephadm'
Dec  4 05:14:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  4 05:14:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641450880' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  4 05:14:09 np0005545273 laughing_banach[75689]: 
Dec  4 05:14:09 np0005545273 laughing_banach[75689]: {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "health": {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "status": "HEALTH_OK",
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "checks": {},
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "mutes": []
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    },
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "election_epoch": 5,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "quorum": [
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        0
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    ],
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "quorum_names": [
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "compute-0"
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    ],
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "quorum_age": 2,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "monmap": {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "epoch": 1,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "min_mon_release_name": "tentacle",
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_mons": 1
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    },
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "osdmap": {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "epoch": 1,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_osds": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_up_osds": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "osd_up_since": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_in_osds": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "osd_in_since": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_remapped_pgs": 0
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    },
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "pgmap": {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "pgs_by_state": [],
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_pgs": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_pools": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_objects": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "data_bytes": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "bytes_used": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "bytes_avail": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "bytes_total": 0
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    },
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "fsmap": {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "epoch": 1,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "btime": "2025-12-04T10:14:03:532003+0000",
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "by_rank": [],
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "up:standby": 0
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    },
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "mgrmap": {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "available": false,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "num_standbys": 0,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "modules": [
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:            "iostat",
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:            "nfs"
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        ],
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "services": {}
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    },
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "servicemap": {
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "epoch": 1,
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "modified": "2025-12-04T10:14:03.534445+0000",
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:        "services": {}
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    },
Dec  4 05:14:09 np0005545273 laughing_banach[75689]:    "progress_events": {}
Dec  4 05:14:09 np0005545273 laughing_banach[75689]: }
Dec  4 05:14:09 np0005545273 systemd[1]: libpod-21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24.scope: Deactivated successfully.
Dec  4 05:14:09 np0005545273 podman[75652]: 2025-12-04 10:14:09.303057831 +0000 UTC m=+0.427363637 container died 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:14:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b3cc44e2fe907efe76928b07c33abffe21d31091f697c85f9be8160700e9b675-merged.mount: Deactivated successfully.
Dec  4 05:14:09 np0005545273 podman[75652]: 2025-12-04 10:14:09.347219905 +0000 UTC m=+0.471525711 container remove 21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24 (image=quay.io/ceph/ceph:v20, name=laughing_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:09 np0005545273 systemd[1]: libpod-conmon-21e73cc1de71dca8b09b793e35d435cc2b7ce004f4c0234ff46135df58f91a24.scope: Deactivated successfully.
Dec  4 05:14:09 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'crash'
Dec  4 05:14:10 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'dashboard'
Dec  4 05:14:10 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'devicehealth'
Dec  4 05:14:10 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'diskprediction_local'
Dec  4 05:14:10 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  4 05:14:10 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  4 05:14:10 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]:  from numpy import show_config as show_numpy_config
Dec  4 05:14:10 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'influx'
Dec  4 05:14:11 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'insights'
Dec  4 05:14:11 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'iostat'
Dec  4 05:14:11 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'k8sevents'
Dec  4 05:14:11 np0005545273 podman[75739]: 2025-12-04 10:14:11.42631518 +0000 UTC m=+0.053541244 container create ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:11 np0005545273 systemd[1]: Started libpod-conmon-ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59.scope.
Dec  4 05:14:11 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:11 np0005545273 podman[75739]: 2025-12-04 10:14:11.400919252 +0000 UTC m=+0.028145366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:11 np0005545273 podman[75739]: 2025-12-04 10:14:11.512962598 +0000 UTC m=+0.140188672 container init ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:11 np0005545273 podman[75739]: 2025-12-04 10:14:11.522911421 +0000 UTC m=+0.150137495 container start ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:11 np0005545273 podman[75739]: 2025-12-04 10:14:11.533022726 +0000 UTC m=+0.160248790 container attach ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:14:11 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'localpool'
Dec  4 05:14:11 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'mds_autoscaler'
Dec  4 05:14:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  4 05:14:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927057361' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]: 
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]: {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "health": {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "status": "HEALTH_OK",
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "checks": {},
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "mutes": []
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    },
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "election_epoch": 5,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "quorum": [
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        0
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    ],
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "quorum_names": [
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "compute-0"
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    ],
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "quorum_age": 5,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "monmap": {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "epoch": 1,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "min_mon_release_name": "tentacle",
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_mons": 1
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    },
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "osdmap": {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "epoch": 1,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_osds": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_up_osds": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "osd_up_since": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_in_osds": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "osd_in_since": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_remapped_pgs": 0
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    },
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "pgmap": {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "pgs_by_state": [],
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_pgs": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_pools": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_objects": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "data_bytes": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "bytes_used": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "bytes_avail": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "bytes_total": 0
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    },
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "fsmap": {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "epoch": 1,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "btime": "2025-12-04T10:14:03:532003+0000",
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "by_rank": [],
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "up:standby": 0
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    },
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "mgrmap": {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "available": false,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "num_standbys": 0,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "modules": [
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:            "iostat",
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:            "nfs"
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        ],
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "services": {}
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    },
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "servicemap": {
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "epoch": 1,
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "modified": "2025-12-04T10:14:03.534445+0000",
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:        "services": {}
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    },
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]:    "progress_events": {}
Dec  4 05:14:11 np0005545273 suspicious_maxwell[75755]: }
Dec  4 05:14:11 np0005545273 systemd[1]: libpod-ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59.scope: Deactivated successfully.
Dec  4 05:14:11 np0005545273 podman[75739]: 2025-12-04 10:14:11.730417352 +0000 UTC m=+0.357643416 container died ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:11 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6c3c2637c062b5ce5675f43076ddbfdfdc909e1413e9853cebc42335f91286d9-merged.mount: Deactivated successfully.
Dec  4 05:14:11 np0005545273 podman[75739]: 2025-12-04 10:14:11.768016249 +0000 UTC m=+0.395242313 container remove ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59 (image=quay.io/ceph/ceph:v20, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:14:11 np0005545273 systemd[1]: libpod-conmon-ae70b16e7ffcdf72be5feef0cae0d089c41288dcec867e2ea9e2cb243d9bef59.scope: Deactivated successfully.
Dec  4 05:14:11 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'mirroring'
Dec  4 05:14:12 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'nfs'
Dec  4 05:14:12 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'orchestrator'
Dec  4 05:14:12 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'osd_perf_query'
Dec  4 05:14:12 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'osd_support'
Dec  4 05:14:12 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'pg_autoscaler'
Dec  4 05:14:12 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'progress'
Dec  4 05:14:12 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'prometheus'
Dec  4 05:14:13 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'rbd_support'
Dec  4 05:14:13 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'rgw'
Dec  4 05:14:13 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'rook'
Dec  4 05:14:13 np0005545273 podman[75796]: 2025-12-04 10:14:13.815399267 +0000 UTC m=+0.024399412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:13 np0005545273 podman[75796]: 2025-12-04 10:14:13.981493499 +0000 UTC m=+0.190493544 container create 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:14:14 np0005545273 systemd[1]: Started libpod-conmon-7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e.scope.
Dec  4 05:14:14 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:14 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:14 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:14 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:14 np0005545273 podman[75796]: 2025-12-04 10:14:14.071346927 +0000 UTC m=+0.280347052 container init 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  4 05:14:14 np0005545273 podman[75796]: 2025-12-04 10:14:14.077877914 +0000 UTC m=+0.286877959 container start 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:14 np0005545273 podman[75796]: 2025-12-04 10:14:14.081830505 +0000 UTC m=+0.290830570 container attach 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'selftest'
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'smb'
Dec  4 05:14:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  4 05:14:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421471824' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]: 
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]: {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "health": {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "status": "HEALTH_OK",
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "checks": {},
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "mutes": []
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    },
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "election_epoch": 5,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "quorum": [
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        0
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    ],
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "quorum_names": [
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "compute-0"
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    ],
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "quorum_age": 7,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "monmap": {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "epoch": 1,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "min_mon_release_name": "tentacle",
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_mons": 1
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    },
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "osdmap": {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "epoch": 1,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_osds": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_up_osds": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "osd_up_since": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_in_osds": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "osd_in_since": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_remapped_pgs": 0
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    },
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "pgmap": {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "pgs_by_state": [],
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_pgs": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_pools": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_objects": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "data_bytes": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "bytes_used": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "bytes_avail": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "bytes_total": 0
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    },
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "fsmap": {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "epoch": 1,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "btime": "2025-12-04T10:14:03:532003+0000",
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "by_rank": [],
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "up:standby": 0
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    },
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "mgrmap": {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "available": false,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "num_standbys": 0,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "modules": [
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:            "iostat",
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:            "nfs"
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        ],
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "services": {}
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    },
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "servicemap": {
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "epoch": 1,
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "modified": "2025-12-04T10:14:03.534445+0000",
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:        "services": {}
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    },
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]:    "progress_events": {}
Dec  4 05:14:14 np0005545273 peaceful_diffie[75812]: }
Dec  4 05:14:14 np0005545273 systemd[1]: libpod-7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e.scope: Deactivated successfully.
Dec  4 05:14:14 np0005545273 podman[75796]: 2025-12-04 10:14:14.301189337 +0000 UTC m=+0.510189382 container died 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c422c36129e4a94e7b9fb66fea28e8d189e891abbb50ddf530a8c115d8159ef4-merged.mount: Deactivated successfully.
Dec  4 05:14:14 np0005545273 podman[75796]: 2025-12-04 10:14:14.334530167 +0000 UTC m=+0.543530202 container remove 7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e (image=quay.io/ceph/ceph:v20, name=peaceful_diffie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  4 05:14:14 np0005545273 systemd[1]: libpod-conmon-7b196d33eb510d5c749473cca075714805ccc3db62098a70877dd42d6f22033e.scope: Deactivated successfully.
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'snap_schedule'
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'stats'
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'status'
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'telegraf'
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'telemetry'
Dec  4 05:14:14 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'test_orchestrator'
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'volumes'
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: ms_deliver_dispatch: unhandled message 0x5563bf9e7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.iwufnj
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr handle_mgr_map Activating!
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.iwufnj(active, starting, since 0.011833s)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr handle_mgr_map I am now activating
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e1 all = 1
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} v 0)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: balancer
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [balancer INFO root] Starting
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: crash
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:14:15
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Manager daemon compute-0.iwufnj is now available
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [balancer INFO root] No pools available
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: devicehealth
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] Starting
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: iostat
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: nfs
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: orchestrator
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: pg_autoscaler
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: progress
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [progress INFO root] Loading...
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [progress INFO root] No stored events to load
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [progress INFO root] Loaded [] historic events
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [progress INFO root] Loaded OSDMap, ready.
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] recovery thread starting
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] starting setup
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: rbd_support
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: status
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} v 0)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: telemetry
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] PerfHandler: starting
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TaskHandler: starting
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} v 0)
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] setup complete
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec  4 05:14:15 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: volumes
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: Activating manager daemon compute-0.iwufnj
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: Manager daemon compute-0.iwufnj is now available
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:15 np0005545273 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:16 np0005545273 podman[75928]: 2025-12-04 10:14:16.407885082 +0000 UTC m=+0.050173034 container create 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:14:16 np0005545273 systemd[1]: Started libpod-conmon-1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e.scope.
Dec  4 05:14:16 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:16 np0005545273 podman[75928]: 2025-12-04 10:14:16.390356727 +0000 UTC m=+0.032644719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:16 np0005545273 podman[75928]: 2025-12-04 10:14:16.506040591 +0000 UTC m=+0.148328643 container init 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:16 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.iwufnj(active, since 1.03502s)
Dec  4 05:14:16 np0005545273 podman[75928]: 2025-12-04 10:14:16.517899244 +0000 UTC m=+0.160187196 container start 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:14:16 np0005545273 podman[75928]: 2025-12-04 10:14:16.522339684 +0000 UTC m=+0.164627666 container attach 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:16 np0005545273 ceph-mon[75358]: from='mgr.14102 192.168.122.100:0/2823957885' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  4 05:14:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286944483' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]: 
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]: {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "health": {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "status": "HEALTH_OK",
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "checks": {},
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "mutes": []
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    },
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "election_epoch": 5,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "quorum": [
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        0
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    ],
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "quorum_names": [
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "compute-0"
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    ],
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "quorum_age": 10,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "monmap": {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "epoch": 1,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "min_mon_release_name": "tentacle",
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_mons": 1
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    },
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "osdmap": {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "epoch": 1,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_osds": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_up_osds": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "osd_up_since": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_in_osds": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "osd_in_since": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_remapped_pgs": 0
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    },
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "pgmap": {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "pgs_by_state": [],
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_pgs": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_pools": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_objects": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "data_bytes": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "bytes_used": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "bytes_avail": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "bytes_total": 0
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    },
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "fsmap": {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "epoch": 1,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "btime": "2025-12-04T10:14:03:532003+0000",
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "by_rank": [],
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "up:standby": 0
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    },
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "mgrmap": {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "available": true,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "num_standbys": 0,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "modules": [
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:            "iostat",
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:            "nfs"
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        ],
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "services": {}
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    },
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "servicemap": {
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "epoch": 1,
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "modified": "2025-12-04T10:14:03.534445+0000",
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:        "services": {}
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    },
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]:    "progress_events": {}
Dec  4 05:14:17 np0005545273 quirky_feynman[75945]: }
Dec  4 05:14:17 np0005545273 systemd[1]: libpod-1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e.scope: Deactivated successfully.
Dec  4 05:14:17 np0005545273 podman[75971]: 2025-12-04 10:14:17.074661703 +0000 UTC m=+0.022086159 container died 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:14:17 np0005545273 systemd[1]: var-lib-containers-storage-overlay-74a515a65ed79b38cdb8d2ce16687729ba1aa922264b09926e9d52c84819fc30-merged.mount: Deactivated successfully.
Dec  4 05:14:17 np0005545273 podman[75971]: 2025-12-04 10:14:17.119834486 +0000 UTC m=+0.067258932 container remove 1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e (image=quay.io/ceph/ceph:v20, name=quirky_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:14:17 np0005545273 systemd[1]: libpod-conmon-1827e52e2330ba521ef6c01ceb6ae5946b702e25830675d336aed04f3a46211e.scope: Deactivated successfully.
Dec  4 05:14:17 np0005545273 podman[75986]: 2025-12-04 10:14:17.219213786 +0000 UTC m=+0.060213505 container create 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:17 np0005545273 systemd[1]: Started libpod-conmon-45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0.scope.
Dec  4 05:14:17 np0005545273 podman[75986]: 2025-12-04 10:14:17.192655578 +0000 UTC m=+0.033655367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:17 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:17 np0005545273 podman[75986]: 2025-12-04 10:14:17.315571572 +0000 UTC m=+0.156571291 container init 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:17 np0005545273 podman[75986]: 2025-12-04 10:14:17.322450596 +0000 UTC m=+0.163450325 container start 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:14:17 np0005545273 podman[75986]: 2025-12-04 10:14:17.327009418 +0000 UTC m=+0.168009147 container attach 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:14:17 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:17 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:17 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.iwufnj(active, since 2s)
Dec  4 05:14:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  4 05:14:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2335252299' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  4 05:14:17 np0005545273 zen_hertz[76002]: 
Dec  4 05:14:17 np0005545273 zen_hertz[76002]: [global]
Dec  4 05:14:17 np0005545273 zen_hertz[76002]: #011fsid = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:14:17 np0005545273 zen_hertz[76002]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  4 05:14:17 np0005545273 zen_hertz[76002]: #011osd_crush_chooseleaf_type = 0
Dec  4 05:14:17 np0005545273 systemd[1]: libpod-45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0.scope: Deactivated successfully.
Dec  4 05:14:17 np0005545273 podman[75986]: 2025-12-04 10:14:17.836942712 +0000 UTC m=+0.677942451 container died 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  4 05:14:17 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1b844cd625f7052a097508e873aeb42ef72dd95f076d7143308c53345bc22b4a-merged.mount: Deactivated successfully.
Dec  4 05:14:17 np0005545273 podman[75986]: 2025-12-04 10:14:17.874092562 +0000 UTC m=+0.715092281 container remove 45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0 (image=quay.io/ceph/ceph:v20, name=zen_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:17 np0005545273 systemd[1]: libpod-conmon-45a1df8509f6083782b858ec9149d0af3fa5c36d0747e98d6de0492ed51e8aa0.scope: Deactivated successfully.
Dec  4 05:14:17 np0005545273 podman[76040]: 2025-12-04 10:14:17.94453378 +0000 UTC m=+0.052008547 container create 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:14:17 np0005545273 systemd[1]: Started libpod-conmon-2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba.scope.
Dec  4 05:14:18 np0005545273 podman[76040]: 2025-12-04 10:14:17.916297272 +0000 UTC m=+0.023772089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:18 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:18 np0005545273 podman[76040]: 2025-12-04 10:14:18.031346714 +0000 UTC m=+0.138821461 container init 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 05:14:18 np0005545273 podman[76040]: 2025-12-04 10:14:18.036131 +0000 UTC m=+0.143605727 container start 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:18 np0005545273 podman[76040]: 2025-12-04 10:14:18.039225586 +0000 UTC m=+0.146700313 container attach 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:14:18 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2335252299' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  4 05:14:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec  4 05:14:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:19 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec  4 05:14:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  1: '-n'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  2: 'mgr.compute-0.iwufnj'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  3: '-f'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  4: '--setuser'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  5: 'ceph'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  6: '--setgroup'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  7: 'ceph'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  8: '--default-log-to-file=false'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  9: '--default-log-to-journald=true'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr respawn  exe_path /proc/self/exe
Dec  4 05:14:19 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.iwufnj(active, since 4s)
Dec  4 05:14:19 np0005545273 systemd[1]: libpod-2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba.scope: Deactivated successfully.
Dec  4 05:14:19 np0005545273 podman[76040]: 2025-12-04 10:14:19.62118107 +0000 UTC m=+1.728655837 container died 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:19 np0005545273 systemd[1]: var-lib-containers-storage-overlay-7a15c1abc30a599b809718469c2d96a2ffaab4de3e839b108d2dd38913e3394b-merged.mount: Deactivated successfully.
Dec  4 05:14:19 np0005545273 podman[76040]: 2025-12-04 10:14:19.667546616 +0000 UTC m=+1.775021363 container remove 2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba (image=quay.io/ceph/ceph:v20, name=friendly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:19 np0005545273 systemd[1]: libpod-conmon-2eaece45daa4866466df872c7d8aec0aa1202af884e679b62c0914d2a24adbba.scope: Deactivated successfully.
Dec  4 05:14:19 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: ignoring --setuser ceph since I am not root
Dec  4 05:14:19 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: ignoring --setgroup ceph since I am not root
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: pidfile_write: ignore empty --pid-file
Dec  4 05:14:19 np0005545273 podman[76094]: 2025-12-04 10:14:19.731481907 +0000 UTC m=+0.045020252 container create 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'alerts'
Dec  4 05:14:19 np0005545273 systemd[1]: Started libpod-conmon-807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41.scope.
Dec  4 05:14:19 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:19 np0005545273 podman[76094]: 2025-12-04 10:14:19.786749893 +0000 UTC m=+0.100288258 container init 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:14:19 np0005545273 podman[76094]: 2025-12-04 10:14:19.799163047 +0000 UTC m=+0.112701392 container start 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:19 np0005545273 podman[76094]: 2025-12-04 10:14:19.802857082 +0000 UTC m=+0.116395507 container attach 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:19 np0005545273 podman[76094]: 2025-12-04 10:14:19.71332156 +0000 UTC m=+0.026859935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'balancer'
Dec  4 05:14:19 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'cephadm'
Dec  4 05:14:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  4 05:14:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449980278' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec  4 05:14:20 np0005545273 magical_lovelace[76130]: {
Dec  4 05:14:20 np0005545273 magical_lovelace[76130]:    "epoch": 5,
Dec  4 05:14:20 np0005545273 magical_lovelace[76130]:    "available": true,
Dec  4 05:14:20 np0005545273 magical_lovelace[76130]:    "active_name": "compute-0.iwufnj",
Dec  4 05:14:20 np0005545273 magical_lovelace[76130]:    "num_standby": 0
Dec  4 05:14:20 np0005545273 magical_lovelace[76130]: }
Dec  4 05:14:20 np0005545273 systemd[1]: libpod-807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41.scope: Deactivated successfully.
Dec  4 05:14:20 np0005545273 podman[76094]: 2025-12-04 10:14:20.271166218 +0000 UTC m=+0.584704613 container died 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:14:20 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a7b3c6d7408f9dfd8b0fc79be157c05ad99335f2b47b78abb5d5b13ef639fa18-merged.mount: Deactivated successfully.
Dec  4 05:14:20 np0005545273 podman[76094]: 2025-12-04 10:14:20.312413061 +0000 UTC m=+0.625951406 container remove 807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41 (image=quay.io/ceph/ceph:v20, name=magical_lovelace, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:14:20 np0005545273 systemd[1]: libpod-conmon-807e62daf08a5ec99e4b474393d9bd33558850f2cc7708bf56977bb6d406ca41.scope: Deactivated successfully.
Dec  4 05:14:20 np0005545273 podman[76174]: 2025-12-04 10:14:20.375864163 +0000 UTC m=+0.041831373 container create ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:20 np0005545273 systemd[1]: Started libpod-conmon-ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1.scope.
Dec  4 05:14:20 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:20 np0005545273 podman[76174]: 2025-12-04 10:14:20.355299923 +0000 UTC m=+0.021267183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:20 np0005545273 podman[76174]: 2025-12-04 10:14:20.460200293 +0000 UTC m=+0.126167533 container init ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:20 np0005545273 podman[76174]: 2025-12-04 10:14:20.467982673 +0000 UTC m=+0.133949893 container start ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:14:20 np0005545273 podman[76174]: 2025-12-04 10:14:20.471635789 +0000 UTC m=+0.137603039 container attach ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:20 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2477013307' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  4 05:14:20 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'crash'
Dec  4 05:14:20 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'dashboard'
Dec  4 05:14:21 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'devicehealth'
Dec  4 05:14:21 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'diskprediction_local'
Dec  4 05:14:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  4 05:14:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  4 05:14:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]:  from numpy import show_config as show_numpy_config
Dec  4 05:14:21 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'influx'
Dec  4 05:14:21 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'insights'
Dec  4 05:14:21 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'iostat'
Dec  4 05:14:21 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'k8sevents'
Dec  4 05:14:22 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'localpool'
Dec  4 05:14:22 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'mds_autoscaler'
Dec  4 05:14:22 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'mirroring'
Dec  4 05:14:22 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'nfs'
Dec  4 05:14:22 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'orchestrator'
Dec  4 05:14:23 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'osd_perf_query'
Dec  4 05:14:23 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'osd_support'
Dec  4 05:14:23 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'pg_autoscaler'
Dec  4 05:14:23 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'progress'
Dec  4 05:14:23 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'prometheus'
Dec  4 05:14:23 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'rbd_support'
Dec  4 05:14:23 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'rgw'
Dec  4 05:14:24 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'rook'
Dec  4 05:14:24 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'selftest'
Dec  4 05:14:24 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'smb'
Dec  4 05:14:25 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'snap_schedule'
Dec  4 05:14:25 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'stats'
Dec  4 05:14:25 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'status'
Dec  4 05:14:25 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'telegraf'
Dec  4 05:14:25 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'telemetry'
Dec  4 05:14:25 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'test_orchestrator'
Dec  4 05:14:25 np0005545273 ceph-mgr[75651]: mgr[py] Loading python module 'volumes'
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Active manager daemon compute-0.iwufnj restarted
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.iwufnj
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: ms_deliver_dispatch: unhandled message 0x55fe4a77a000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: mgr handle_mgr_map Activating!
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: mgr handle_mgr_map I am now activating
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.iwufnj(active, starting, since 0.542923s)
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} v 0)
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr metadata", "who": "compute-0.iwufnj", "id": "compute-0.iwufnj"} : dispatch
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata"} : dispatch
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e1 all = 1
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata"} : dispatch
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata"} : dispatch
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: balancer
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Starting
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Manager daemon compute-0.iwufnj is now available
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:14:26
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:14:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] No pools available
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: Active manager daemon compute-0.iwufnj restarted
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: Activating manager daemon compute-0.iwufnj
Dec  4 05:14:26 np0005545273 ceph-mon[75358]: Manager daemon compute-0.iwufnj is now available
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.iwufnj(active, since 1.71936s)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec  4 05:14:27 np0005545273 distracted_booth[76195]: {
Dec  4 05:14:27 np0005545273 distracted_booth[76195]:    "mgrmap_epoch": 7,
Dec  4 05:14:27 np0005545273 distracted_booth[76195]:    "initialized": true
Dec  4 05:14:27 np0005545273 distracted_booth[76195]: }
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: cephadm
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: crash
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: devicehealth
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] Starting
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: iostat
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: nfs
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: orchestrator
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: pg_autoscaler
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: progress
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [progress INFO root] Loading...
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [progress INFO root] No stored events to load
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [progress INFO root] Loaded [] historic events
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [progress INFO root] Loaded OSDMap, ready.
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  4 05:14:27 np0005545273 systemd[1]: libpod-ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1.scope: Deactivated successfully.
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  4 05:14:27 np0005545273 podman[76174]: 2025-12-04 10:14:27.87285493 +0000 UTC m=+7.538822230 container died ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] recovery thread starting
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] starting setup
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: rbd_support
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: status
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: telemetry
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} v 0)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] PerfHandler: starting
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TaskHandler: starting
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} v 0)
Dec  4 05:14:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] setup complete
Dec  4 05:14:27 np0005545273 ceph-mgr[75651]: mgr load Constructed class from module: volumes
Dec  4 05:14:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-52d93b9e7ee3a3b58b518df108df5f75179c2bd53962eb9ccdbd1e2941340e04-merged.mount: Deactivated successfully.
Dec  4 05:14:27 np0005545273 podman[76174]: 2025-12-04 10:14:27.929295596 +0000 UTC m=+7.595262856 container remove ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1 (image=quay.io/ceph/ceph:v20, name=distracted_booth, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:27 np0005545273 systemd[1]: libpod-conmon-ee832cb9f6b2e77d374607311963c4d01897a634d3d9b844e691d03d519f7dc1.scope: Deactivated successfully.
Dec  4 05:14:28 np0005545273 podman[76344]: 2025-12-04 10:14:28.00447392 +0000 UTC m=+0.047688940 container create 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:14:28 np0005545273 systemd[1]: Started libpod-conmon-0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b.scope.
Dec  4 05:14:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:28 np0005545273 podman[76344]: 2025-12-04 10:14:27.98613116 +0000 UTC m=+0.029346210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:28 np0005545273 podman[76344]: 2025-12-04 10:14:28.092301713 +0000 UTC m=+0.135516753 container init 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:14:28 np0005545273 podman[76344]: 2025-12-04 10:14:28.097647909 +0000 UTC m=+0.140862929 container start 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:28 np0005545273 podman[76344]: 2025-12-04 10:14:28.101157352 +0000 UTC m=+0.144372382 container attach 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:14:28 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec  4 05:14:28 np0005545273 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:28] ENGINE Bus STARTING
Dec  4 05:14:28 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:28] ENGINE Bus STARTING
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: Found migration_current of "None". Setting to last migration.
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/mirror_snapshot_schedule"} : dispatch
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.iwufnj/trash_purge_schedule"} : dispatch
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec  4 05:14:28 np0005545273 unruffled_poitras[76360]: module 'orchestrator' is already enabled (always-on)
Dec  4 05:14:28 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.iwufnj(active, since 2s)
Dec  4 05:14:28 np0005545273 systemd[1]: libpod-0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b.scope: Deactivated successfully.
Dec  4 05:14:28 np0005545273 podman[76344]: 2025-12-04 10:14:28.878435342 +0000 UTC m=+0.921650362 container died 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  4 05:14:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ee6e1f04e3d542322020ff86477fd667b517999d5e6cf048aedf003700840bea-merged.mount: Deactivated successfully.
Dec  4 05:14:28 np0005545273 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:28] ENGINE Serving on https://192.168.122.100:7150
Dec  4 05:14:28 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:28] ENGINE Serving on https://192.168.122.100:7150
Dec  4 05:14:28 np0005545273 podman[76344]: 2025-12-04 10:14:28.924715976 +0000 UTC m=+0.967930996 container remove 0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b (image=quay.io/ceph/ceph:v20, name=unruffled_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:14:28 np0005545273 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:28] ENGINE Client ('192.168.122.100', 48252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  4 05:14:28 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:28] ENGINE Client ('192.168.122.100', 48252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  4 05:14:28 np0005545273 systemd[1]: libpod-conmon-0aeddbbe959a91e56f512a70bae5d10d4afb00055c6814e64a86b0f12941397b.scope: Deactivated successfully.
Dec  4 05:14:29 np0005545273 podman[76420]: 2025-12-04 10:14:29.010726035 +0000 UTC m=+0.056102151 container create 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:29 np0005545273 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:29] ENGINE Serving on http://192.168.122.100:8765
Dec  4 05:14:29 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:29] ENGINE Serving on http://192.168.122.100:8765
Dec  4 05:14:29 np0005545273 ceph-mgr[75651]: [cephadm INFO cherrypy.error] [04/Dec/2025:10:14:29] ENGINE Bus STARTED
Dec  4 05:14:29 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : [04/Dec/2025:10:14:29] ENGINE Bus STARTED
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  4 05:14:29 np0005545273 systemd[1]: Started libpod-conmon-09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10.scope.
Dec  4 05:14:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:29 np0005545273 podman[76420]: 2025-12-04 10:14:28.989205667 +0000 UTC m=+0.034581763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:29 np0005545273 podman[76420]: 2025-12-04 10:14:29.097128361 +0000 UTC m=+0.142504457 container init 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:29 np0005545273 podman[76420]: 2025-12-04 10:14:29.102597229 +0000 UTC m=+0.147973335 container start 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:29 np0005545273 podman[76420]: 2025-12-04 10:14:29.107421787 +0000 UTC m=+0.152797893 container attach 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:29 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  4 05:14:29 np0005545273 systemd[1]: libpod-09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10.scope: Deactivated successfully.
Dec  4 05:14:29 np0005545273 podman[76420]: 2025-12-04 10:14:29.550251553 +0000 UTC m=+0.595627659 container died 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bc91711e8e0020140a078fb0da0d6f753c79479036aebaf106ac9c54c1795402-merged.mount: Deactivated successfully.
Dec  4 05:14:29 np0005545273 podman[76420]: 2025-12-04 10:14:29.589759304 +0000 UTC m=+0.635135380 container remove 09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10 (image=quay.io/ceph/ceph:v20, name=affectionate_yonath, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:29 np0005545273 systemd[1]: libpod-conmon-09f434909d5050b1368eb63d6c44bec227006b4349ab558ed7bcd602c53d9d10.scope: Deactivated successfully.
Dec  4 05:14:29 np0005545273 podman[76473]: 2025-12-04 10:14:29.642901202 +0000 UTC m=+0.037413526 container create 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:29 np0005545273 systemd[1]: Started libpod-conmon-8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5.scope.
Dec  4 05:14:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:29 np0005545273 podman[76473]: 2025-12-04 10:14:29.625803354 +0000 UTC m=+0.020315708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:29 np0005545273 podman[76473]: 2025-12-04 10:14:29.730701873 +0000 UTC m=+0.125214217 container init 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:14:29 np0005545273 podman[76473]: 2025-12-04 10:14:29.740322237 +0000 UTC m=+0.134834581 container start 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:29 np0005545273 podman[76473]: 2025-12-04 10:14:29.744035233 +0000 UTC m=+0.138547577 container attach 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:29 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: [04/Dec/2025:10:14:28] ENGINE Bus STARTING
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2352588894' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: [04/Dec/2025:10:14:28] ENGINE Serving on https://192.168.122.100:7150
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: [04/Dec/2025:10:14:28] ENGINE Client ('192.168.122.100', 48252) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: [04/Dec/2025:10:14:29] ENGINE Serving on http://192.168.122.100:8765
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: [04/Dec/2025:10:14:29] ENGINE Bus STARTED
Dec  4 05:14:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec  4 05:14:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_user
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  4 05:14:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec  4 05:14:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_config
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  4 05:14:30 np0005545273 heuristic_benz[76489]: ssh user set to ceph-admin. sudo will be used
Dec  4 05:14:30 np0005545273 systemd[1]: libpod-8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5.scope: Deactivated successfully.
Dec  4 05:14:30 np0005545273 podman[76473]: 2025-12-04 10:14:30.179079719 +0000 UTC m=+0.573592043 container died 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d4507dcd3d3055aed2c5e42582ff4b72cb468e47f7f44a57b5e3cb637d596bf8-merged.mount: Deactivated successfully.
Dec  4 05:14:30 np0005545273 podman[76473]: 2025-12-04 10:14:30.220920663 +0000 UTC m=+0.615433027 container remove 8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5 (image=quay.io/ceph/ceph:v20, name=heuristic_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:30 np0005545273 systemd[1]: libpod-conmon-8db5cf60d872f074d2f1f4f58d47dd302b2e0a52c07449760166e0722a75b8f5.scope: Deactivated successfully.
Dec  4 05:14:30 np0005545273 podman[76528]: 2025-12-04 10:14:30.284400516 +0000 UTC m=+0.043639627 container create 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:14:30 np0005545273 systemd[1]: Started libpod-conmon-210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef.scope.
Dec  4 05:14:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 podman[76528]: 2025-12-04 10:14:30.358002722 +0000 UTC m=+0.117241863 container init 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:14:30 np0005545273 podman[76528]: 2025-12-04 10:14:30.262526363 +0000 UTC m=+0.021765494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:30 np0005545273 podman[76528]: 2025-12-04 10:14:30.368570452 +0000 UTC m=+0.127809563 container start 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:30 np0005545273 podman[76528]: 2025-12-04 10:14:30.373092334 +0000 UTC m=+0.132331475 container attach 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec  4 05:14:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Set ssh private key
Dec  4 05:14:30 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  4 05:14:30 np0005545273 systemd[1]: libpod-210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef.scope: Deactivated successfully.
Dec  4 05:14:30 np0005545273 podman[76528]: 2025-12-04 10:14:30.797091871 +0000 UTC m=+0.556331022 container died 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  4 05:14:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-cd0dee3b488710a53ad683a637ca5d61a7152486d55922c6222bf070ffd171c1-merged.mount: Deactivated successfully.
Dec  4 05:14:30 np0005545273 podman[76528]: 2025-12-04 10:14:30.845044715 +0000 UTC m=+0.604283826 container remove 210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef (image=quay.io/ceph/ceph:v20, name=jolly_varahamihira, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:14:30 np0005545273 systemd[1]: libpod-conmon-210a1fab3180156264d71d1f50d23a6f7335de73fec5ab4f555eafb92301a2ef.scope: Deactivated successfully.
Dec  4 05:14:30 np0005545273 podman[76583]: 2025-12-04 10:14:30.906032233 +0000 UTC m=+0.043684418 container create 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:30 np0005545273 systemd[1]: Started libpod-conmon-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope.
Dec  4 05:14:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:30 np0005545273 podman[76583]: 2025-12-04 10:14:30.883877395 +0000 UTC m=+0.021529520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:30 np0005545273 podman[76583]: 2025-12-04 10:14:30.983499759 +0000 UTC m=+0.121151894 container init 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:30 np0005545273 podman[76583]: 2025-12-04 10:14:30.996847899 +0000 UTC m=+0.134500034 container start 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:14:31 np0005545273 podman[76583]: 2025-12-04 10:14:31.001185407 +0000 UTC m=+0.138837552 container attach 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: Set ssh ssh_user
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: Set ssh ssh_config
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: ssh user set to ceph-admin. sudo will be used
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:31 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  4 05:14:31 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  4 05:14:31 np0005545273 systemd[1]: libpod-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope: Deactivated successfully.
Dec  4 05:14:31 np0005545273 conmon[76599]: conmon 6008233402b8387d5428 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope/container/memory.events
Dec  4 05:14:31 np0005545273 podman[76583]: 2025-12-04 10:14:31.492554468 +0000 UTC m=+0.630206613 container died 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:14:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-06ecb12cf6c5bac06c422a271b82f06256b2829de0304602e0d7238a1751182e-merged.mount: Deactivated successfully.
Dec  4 05:14:31 np0005545273 podman[76583]: 2025-12-04 10:14:31.534509753 +0000 UTC m=+0.672161868 container remove 6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f (image=quay.io/ceph/ceph:v20, name=crazy_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:14:31 np0005545273 systemd[1]: libpod-conmon-6008233402b8387d542828a113cc20ccca27b5ef4fb75471ea5c5d8119098b9f.scope: Deactivated successfully.
Dec  4 05:14:31 np0005545273 podman[76637]: 2025-12-04 10:14:31.605325259 +0000 UTC m=+0.046726573 container create ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:31 np0005545273 systemd[1]: Started libpod-conmon-ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7.scope.
Dec  4 05:14:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019893108 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:14:31 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:31 np0005545273 podman[76637]: 2025-12-04 10:14:31.589211438 +0000 UTC m=+0.030612782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:31 np0005545273 podman[76637]: 2025-12-04 10:14:31.684783741 +0000 UTC m=+0.126185075 container init ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:31 np0005545273 podman[76637]: 2025-12-04 10:14:31.693741202 +0000 UTC m=+0.135142526 container start ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:31 np0005545273 podman[76637]: 2025-12-04 10:14:31.701144785 +0000 UTC m=+0.142546139 container attach ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:31 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:32 np0005545273 practical_snyder[76654]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYFo9w32Y96e7dxBpF4AJjs+qlgFcxOYjX05DtMUxNXFtUOf3ObodRfd0pD647pbzXkmOGGdM76hP956QwlOFiOK0OvMcbb/tkvWQawR2eOE+jl9eDC5G3Ok7ABwMpCNVqOQq/RQihqOVMXikT3NjUSrY34kxFvZm15o3mQlZu6Or1dZh+cXdm8+++GhM5tGjgfOuaOyJP0/didvf8CNuryXN/iH03ct33wRVlDtnIL1xqkpOhCnnjSFrcNhwudKQrA+yKZ00BHF0ZiiR43oxJRZH7yT847dgxrxBfPfD9zXof9tRuweMdgN0o75/kcjbJVkzsunOsBVRzOAp1R5h7qs0Ik1P/QwZczTZvyrlHW9ypgSZZbKqxGsyrhwz0UpVsMo2JGLWrs43tmKC6U9Rsm38X231jzwX8ii2XKVm4jnZleR5zK+KPesG8eYwgE4iVz4npBCt01eglKX96cA5jOURbqXiydJl1JXkbg+IggecbDre8NW3PfmL0hy9faQ8= zuul@controller
Dec  4 05:14:32 np0005545273 systemd[1]: libpod-ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7.scope: Deactivated successfully.
Dec  4 05:14:32 np0005545273 podman[76680]: 2025-12-04 10:14:32.22331471 +0000 UTC m=+0.048281280 container died ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:32 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9bbd249adb0183874f83aa2c3b956a5fa1302149df10cbf09838993242aad2c8-merged.mount: Deactivated successfully.
Dec  4 05:14:32 np0005545273 podman[76680]: 2025-12-04 10:14:32.262806081 +0000 UTC m=+0.087772571 container remove ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7 (image=quay.io/ceph/ceph:v20, name=practical_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:14:32 np0005545273 systemd[1]: libpod-conmon-ca68106908efe2e232292cee4988d8a72e6c0df997712a7ab551816bf81ff3d7.scope: Deactivated successfully.
Dec  4 05:14:32 np0005545273 podman[76695]: 2025-12-04 10:14:32.33993782 +0000 UTC m=+0.047303642 container create 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:32 np0005545273 systemd[1]: Started libpod-conmon-20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3.scope.
Dec  4 05:14:32 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:32 np0005545273 podman[76695]: 2025-12-04 10:14:32.319548814 +0000 UTC m=+0.026914656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:32 np0005545273 podman[76695]: 2025-12-04 10:14:32.423995124 +0000 UTC m=+0.131360956 container init 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:14:32 np0005545273 podman[76695]: 2025-12-04 10:14:32.43815562 +0000 UTC m=+0.145521432 container start 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:14:32 np0005545273 podman[76695]: 2025-12-04 10:14:32.442124281 +0000 UTC m=+0.149490093 container attach 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:14:32 np0005545273 ceph-mon[75358]: Set ssh ssh_identity_key
Dec  4 05:14:32 np0005545273 ceph-mon[75358]: Set ssh private key
Dec  4 05:14:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:32 np0005545273 ceph-mon[75358]: Set ssh ssh_identity_pub
Dec  4 05:14:32 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:33 np0005545273 systemd-logind[798]: New session 21 of user ceph-admin.
Dec  4 05:14:33 np0005545273 systemd[1]: Created slice User Slice of UID 42477.
Dec  4 05:14:33 np0005545273 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  4 05:14:33 np0005545273 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  4 05:14:33 np0005545273 systemd[1]: Starting User Manager for UID 42477...
Dec  4 05:14:33 np0005545273 systemd[76741]: Queued start job for default target Main User Target.
Dec  4 05:14:33 np0005545273 systemd-logind[798]: New session 23 of user ceph-admin.
Dec  4 05:14:33 np0005545273 systemd[76741]: Created slice User Application Slice.
Dec  4 05:14:33 np0005545273 systemd[76741]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  4 05:14:33 np0005545273 systemd[76741]: Started Daily Cleanup of User's Temporary Directories.
Dec  4 05:14:33 np0005545273 systemd[76741]: Reached target Paths.
Dec  4 05:14:33 np0005545273 systemd[76741]: Reached target Timers.
Dec  4 05:14:33 np0005545273 systemd[76741]: Starting D-Bus User Message Bus Socket...
Dec  4 05:14:33 np0005545273 systemd[76741]: Starting Create User's Volatile Files and Directories...
Dec  4 05:14:33 np0005545273 systemd[76741]: Listening on D-Bus User Message Bus Socket.
Dec  4 05:14:33 np0005545273 systemd[76741]: Reached target Sockets.
Dec  4 05:14:33 np0005545273 systemd[76741]: Finished Create User's Volatile Files and Directories.
Dec  4 05:14:33 np0005545273 systemd[76741]: Reached target Basic System.
Dec  4 05:14:33 np0005545273 systemd[76741]: Reached target Main User Target.
Dec  4 05:14:33 np0005545273 systemd[76741]: Startup finished in 148ms.
Dec  4 05:14:33 np0005545273 systemd[1]: Started User Manager for UID 42477.
Dec  4 05:14:33 np0005545273 systemd[1]: Started Session 21 of User ceph-admin.
Dec  4 05:14:33 np0005545273 systemd[1]: Started Session 23 of User ceph-admin.
Dec  4 05:14:33 np0005545273 systemd-logind[798]: New session 24 of user ceph-admin.
Dec  4 05:14:33 np0005545273 systemd[1]: Started Session 24 of User ceph-admin.
Dec  4 05:14:33 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:34 np0005545273 systemd-logind[798]: New session 25 of user ceph-admin.
Dec  4 05:14:34 np0005545273 systemd[1]: Started Session 25 of User ceph-admin.
Dec  4 05:14:34 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  4 05:14:34 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  4 05:14:34 np0005545273 systemd-logind[798]: New session 26 of user ceph-admin.
Dec  4 05:14:34 np0005545273 systemd[1]: Started Session 26 of User ceph-admin.
Dec  4 05:14:34 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:34 np0005545273 systemd-logind[798]: New session 27 of user ceph-admin.
Dec  4 05:14:34 np0005545273 systemd[1]: Started Session 27 of User ceph-admin.
Dec  4 05:14:35 np0005545273 systemd-logind[798]: New session 28 of user ceph-admin.
Dec  4 05:14:35 np0005545273 systemd[1]: Started Session 28 of User ceph-admin.
Dec  4 05:14:35 np0005545273 systemd-logind[798]: New session 29 of user ceph-admin.
Dec  4 05:14:35 np0005545273 systemd[1]: Started Session 29 of User ceph-admin.
Dec  4 05:14:35 np0005545273 ceph-mon[75358]: Deploying cephadm binary to compute-0
Dec  4 05:14:35 np0005545273 systemd-logind[798]: New session 30 of user ceph-admin.
Dec  4 05:14:35 np0005545273 systemd[1]: Started Session 30 of User ceph-admin.
Dec  4 05:14:35 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:36 np0005545273 systemd-logind[798]: New session 31 of user ceph-admin.
Dec  4 05:14:36 np0005545273 systemd[1]: Started Session 31 of User ceph-admin.
Dec  4 05:14:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052456 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:14:36 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:37 np0005545273 systemd-logind[798]: New session 32 of user ceph-admin.
Dec  4 05:14:37 np0005545273 systemd[1]: Started Session 32 of User ceph-admin.
Dec  4 05:14:37 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:37 np0005545273 systemd-logind[798]: New session 33 of user ceph-admin.
Dec  4 05:14:37 np0005545273 systemd[1]: Started Session 33 of User ceph-admin.
Dec  4 05:14:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  4 05:14:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:38 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Added host compute-0
Dec  4 05:14:38 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  4 05:14:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  4 05:14:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  4 05:14:38 np0005545273 blissful_lamarr[76711]: Added host 'compute-0' with addr '192.168.122.100'
Dec  4 05:14:38 np0005545273 systemd[1]: libpod-20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3.scope: Deactivated successfully.
Dec  4 05:14:38 np0005545273 podman[76695]: 2025-12-04 10:14:38.403999557 +0000 UTC m=+6.111365379 container died 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:14:38 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3cbbf0f7d20f1bfd0432ae6d9f0f94fc52bf857134aec9b83d7f1dd123c316ad-merged.mount: Deactivated successfully.
Dec  4 05:14:38 np0005545273 podman[76695]: 2025-12-04 10:14:38.454901124 +0000 UTC m=+6.162266936 container remove 20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3 (image=quay.io/ceph/ceph:v20, name=blissful_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:14:38 np0005545273 systemd[1]: libpod-conmon-20292e06a9163d8bc0c1008812b3dc58d75698cb942ba915d104e77fba5300f3.scope: Deactivated successfully.
Dec  4 05:14:38 np0005545273 podman[77160]: 2025-12-04 10:14:38.526695326 +0000 UTC m=+0.046868774 container create 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:38 np0005545273 systemd[1]: Started libpod-conmon-9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c.scope.
Dec  4 05:14:38 np0005545273 podman[77160]: 2025-12-04 10:14:38.505533555 +0000 UTC m=+0.025706963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:38 np0005545273 podman[77160]: 2025-12-04 10:14:38.624670751 +0000 UTC m=+0.144844229 container init 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:38 np0005545273 podman[77160]: 2025-12-04 10:14:38.634930756 +0000 UTC m=+0.155104164 container start 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:14:38 np0005545273 podman[77160]: 2025-12-04 10:14:38.63907086 +0000 UTC m=+0.159244308 container attach 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:14:38 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:39 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  4 05:14:39 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  4 05:14:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  4 05:14:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:39 np0005545273 serene_herschel[77178]: Scheduled mon update...
Dec  4 05:14:39 np0005545273 systemd[1]: libpod-9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c.scope: Deactivated successfully.
Dec  4 05:14:39 np0005545273 podman[77230]: 2025-12-04 10:14:39.205570605 +0000 UTC m=+0.030931999 container died 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:14:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-08039ed1e5cf4e4a1b3fb24564d6c6f6e1450e49402ee15123d46cb961e96e78-merged.mount: Deactivated successfully.
Dec  4 05:14:39 np0005545273 podman[77230]: 2025-12-04 10:14:39.239472785 +0000 UTC m=+0.064834179 container remove 9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c (image=quay.io/ceph/ceph:v20, name=serene_herschel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:39 np0005545273 systemd[1]: libpod-conmon-9a114e3dea1f9252076a4ba297acde903805115902c3d76c4ec37575087fc23c.scope: Deactivated successfully.
Dec  4 05:14:39 np0005545273 podman[77246]: 2025-12-04 10:14:39.309981896 +0000 UTC m=+0.042629790 container create 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:14:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:39 np0005545273 ceph-mon[75358]: Added host compute-0
Dec  4 05:14:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:39 np0005545273 systemd[1]: Started libpod-conmon-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope.
Dec  4 05:14:39 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:39 np0005545273 podman[77246]: 2025-12-04 10:14:39.379264803 +0000 UTC m=+0.111912697 container init 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:39 np0005545273 podman[77246]: 2025-12-04 10:14:39.287334497 +0000 UTC m=+0.019982441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:39 np0005545273 podman[77246]: 2025-12-04 10:14:39.389918915 +0000 UTC m=+0.122566809 container start 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:14:39 np0005545273 podman[77246]: 2025-12-04 10:14:39.393768284 +0000 UTC m=+0.126416208 container attach 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:14:39 np0005545273 podman[77213]: 2025-12-04 10:14:39.578502911 +0000 UTC m=+0.780647131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:39 np0005545273 podman[77299]: 2025-12-04 10:14:39.69666637 +0000 UTC m=+0.048078597 container create 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:39 np0005545273 systemd[1]: Started libpod-conmon-6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed.scope.
Dec  4 05:14:39 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:39 np0005545273 podman[77299]: 2025-12-04 10:14:39.676806773 +0000 UTC m=+0.028219030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:39 np0005545273 podman[77299]: 2025-12-04 10:14:39.768580315 +0000 UTC m=+0.119992572 container init 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:39 np0005545273 podman[77299]: 2025-12-04 10:14:39.776643631 +0000 UTC m=+0.128055868 container start 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:14:39 np0005545273 podman[77299]: 2025-12-04 10:14:39.780206364 +0000 UTC m=+0.131618641 container attach 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:14:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:39 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  4 05:14:39 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  4 05:14:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  4 05:14:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:39 np0005545273 keen_cohen[77262]: Scheduled mgr update...
Dec  4 05:14:39 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:39 np0005545273 systemd[1]: libpod-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope: Deactivated successfully.
Dec  4 05:14:39 np0005545273 conmon[77262]: conmon 5fc18a6be8cf3fc71582 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope/container/memory.events
Dec  4 05:14:39 np0005545273 podman[77246]: 2025-12-04 10:14:39.872588068 +0000 UTC m=+0.605235962 container died 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:14:39 np0005545273 modest_booth[77315]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec  4 05:14:39 np0005545273 systemd[1]: libpod-6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed.scope: Deactivated successfully.
Dec  4 05:14:39 np0005545273 podman[77299]: 2025-12-04 10:14:39.896017211 +0000 UTC m=+0.247429438 container died 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-37083f51f2b0ad4df1aa9907d3638a9ba02ebc92e36c947fec304f1ea6fd3cec-merged.mount: Deactivated successfully.
Dec  4 05:14:39 np0005545273 podman[77246]: 2025-12-04 10:14:39.920616914 +0000 UTC m=+0.653264808 container remove 5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad (image=quay.io/ceph/ceph:v20, name=keen_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:39 np0005545273 systemd[1]: libpod-conmon-5fc18a6be8cf3fc7158234695d137aa20aa35ccb8e5c2fe002f280f674342cad.scope: Deactivated successfully.
Dec  4 05:14:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-74a336f0dd93ebeb58a2693bd41cf71fb59c22b1e208973e0951397de169b045-merged.mount: Deactivated successfully.
Dec  4 05:14:39 np0005545273 podman[77299]: 2025-12-04 10:14:39.955524783 +0000 UTC m=+0.306937040 container remove 6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed (image=quay.io/ceph/ceph:v20, name=modest_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:14:39 np0005545273 systemd[1]: libpod-conmon-6cc9f7c3eff5b821a50f98c2d184ca4d5469a70c4a4d9f13eb7d1155e89f5eed.scope: Deactivated successfully.
Dec  4 05:14:39 np0005545273 podman[77343]: 2025-12-04 10:14:39.979982413 +0000 UTC m=+0.038679978 container create a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:40 np0005545273 systemd[1]: Started libpod-conmon-a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588.scope.
Dec  4 05:14:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:40 np0005545273 podman[77343]: 2025-12-04 10:14:40.057203494 +0000 UTC m=+0.115901089 container init a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:14:40 np0005545273 podman[77343]: 2025-12-04 10:14:39.962807764 +0000 UTC m=+0.021505329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:40 np0005545273 podman[77343]: 2025-12-04 10:14:40.062906986 +0000 UTC m=+0.121604571 container start a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:40 np0005545273 podman[77343]: 2025-12-04 10:14:40.067230734 +0000 UTC m=+0.125928289 container attach a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: Saving service mon spec with placement count:5
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:40 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service crash spec with placement *
Dec  4 05:14:40 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  4 05:14:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:40 np0005545273 nifty_yalow[77363]: Scheduled crash update...
Dec  4 05:14:40 np0005545273 systemd[1]: libpod-a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588.scope: Deactivated successfully.
Dec  4 05:14:40 np0005545273 podman[77343]: 2025-12-04 10:14:40.50389725 +0000 UTC m=+0.562594815 container died a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  4 05:14:40 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ac64a5369934b20d4203704040de327b3a09cf057eef2f8cf01846b37af9b755-merged.mount: Deactivated successfully.
Dec  4 05:14:40 np0005545273 podman[77343]: 2025-12-04 10:14:40.543926111 +0000 UTC m=+0.602623656 container remove a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588 (image=quay.io/ceph/ceph:v20, name=nifty_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:40 np0005545273 systemd[1]: libpod-conmon-a46a521ff2b08307e89c53358b62255e51bdddbc6df25ef55cc9bf3a4fa31588.scope: Deactivated successfully.
Dec  4 05:14:40 np0005545273 podman[77517]: 2025-12-04 10:14:40.610579922 +0000 UTC m=+0.043402153 container create 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:14:40 np0005545273 systemd[1]: Started libpod-conmon-7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865.scope.
Dec  4 05:14:40 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:40 np0005545273 podman[77517]: 2025-12-04 10:14:40.689066066 +0000 UTC m=+0.121888337 container init 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:40 np0005545273 podman[77517]: 2025-12-04 10:14:40.594341149 +0000 UTC m=+0.027163420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:40 np0005545273 podman[77517]: 2025-12-04 10:14:40.696769454 +0000 UTC m=+0.129591705 container start 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:14:40 np0005545273 podman[77517]: 2025-12-04 10:14:40.703131299 +0000 UTC m=+0.135953580 container attach 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:41 np0005545273 podman[77603]: 2025-12-04 10:14:41.068816855 +0000 UTC m=+0.085469730 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4136323843' entity='client.admin' 
Dec  4 05:14:41 np0005545273 podman[77517]: 2025-12-04 10:14:41.144245164 +0000 UTC m=+0.577067395 container died 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:14:41 np0005545273 systemd[1]: libpod-7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865.scope: Deactivated successfully.
Dec  4 05:14:41 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6aa2031bd071f8c042b08ead94904c572fef56292ab0bad56107991f6f633784-merged.mount: Deactivated successfully.
Dec  4 05:14:41 np0005545273 podman[77517]: 2025-12-04 10:14:41.184476358 +0000 UTC m=+0.617298589 container remove 7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865 (image=quay.io/ceph/ceph:v20, name=romantic_solomon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:14:41 np0005545273 systemd[1]: libpod-conmon-7f5a1e8d7d5e8ac4ffb00d17d853b89ef6bd857d11d715a324b59d51643df865.scope: Deactivated successfully.
Dec  4 05:14:41 np0005545273 podman[77603]: 2025-12-04 10:14:41.21451519 +0000 UTC m=+0.231168035 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:14:41 np0005545273 podman[77638]: 2025-12-04 10:14:41.251859713 +0000 UTC m=+0.040063783 container create b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:41 np0005545273 systemd[1]: Started libpod-conmon-b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8.scope.
Dec  4 05:14:41 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:41 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:41 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:41 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:41 np0005545273 podman[77638]: 2025-12-04 10:14:41.232379692 +0000 UTC m=+0.020583782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:41 np0005545273 podman[77638]: 2025-12-04 10:14:41.332721558 +0000 UTC m=+0.120925628 container init b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:41 np0005545273 podman[77638]: 2025-12-04 10:14:41.339579592 +0000 UTC m=+0.127783672 container start b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:41 np0005545273 podman[77638]: 2025-12-04 10:14:41.343264579 +0000 UTC m=+0.131468669 container attach b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: Saving service mgr spec with placement count:2
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: Saving service crash spec with placement *
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/4136323843' entity='client.admin' 
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054699 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:14:41 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec  4 05:14:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:41 np0005545273 systemd[1]: libpod-b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8.scope: Deactivated successfully.
Dec  4 05:14:41 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:41 np0005545273 podman[77786]: 2025-12-04 10:14:41.874624479 +0000 UTC m=+0.030648473 container died b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:14:41 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b35d2607b3ca7e62076fe898742ae75275de30062ac6054d34c7d14e48b33a0b-merged.mount: Deactivated successfully.
Dec  4 05:14:41 np0005545273 podman[77786]: 2025-12-04 10:14:41.918142033 +0000 UTC m=+0.074165967 container remove b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8 (image=quay.io/ceph/ceph:v20, name=zealous_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:41 np0005545273 systemd[1]: libpod-conmon-b3751017f05854c529ffc8631fa2475a44437eb97b95bec401fbdfcd16a2a5a8.scope: Deactivated successfully.
Dec  4 05:14:41 np0005545273 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77810 (sysctl)
Dec  4 05:14:41 np0005545273 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  4 05:14:41 np0005545273 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  4 05:14:41 np0005545273 podman[77811]: 2025-12-04 10:14:41.997131566 +0000 UTC m=+0.050382678 container create 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:14:42 np0005545273 systemd[1]: Started libpod-conmon-43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8.scope.
Dec  4 05:14:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:42 np0005545273 podman[77811]: 2025-12-04 10:14:41.979908627 +0000 UTC m=+0.033159749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:42 np0005545273 podman[77811]: 2025-12-04 10:14:42.188368071 +0000 UTC m=+0.241619203 container init 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:14:42 np0005545273 podman[77811]: 2025-12-04 10:14:42.196223622 +0000 UTC m=+0.249474734 container start 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:14:42 np0005545273 podman[77811]: 2025-12-04 10:14:42.199495161 +0000 UTC m=+0.252746273 container attach 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  4 05:14:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:42 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Added label _admin to host compute-0
Dec  4 05:14:42 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  4 05:14:42 np0005545273 vibrant_wozniak[77832]: Added label _admin to host compute-0
Dec  4 05:14:42 np0005545273 systemd[1]: libpod-43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8.scope: Deactivated successfully.
Dec  4 05:14:42 np0005545273 podman[77811]: 2025-12-04 10:14:42.613938126 +0000 UTC m=+0.667189248 container died 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  4 05:14:42 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9d85d3e35e152d7202f80b284c65e04e5e1b07f282a71d246f4053fbe2688e2a-merged.mount: Deactivated successfully.
Dec  4 05:14:42 np0005545273 podman[77811]: 2025-12-04 10:14:42.650127878 +0000 UTC m=+0.703378990 container remove 43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8 (image=quay.io/ceph/ceph:v20, name=vibrant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  4 05:14:42 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:42 np0005545273 systemd[1]: libpod-conmon-43dec163a4101b2839492231ea35a71e36febaf83c8b90fec725c25f681c09d8.scope: Deactivated successfully.
Dec  4 05:14:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:42 np0005545273 podman[77949]: 2025-12-04 10:14:42.721232639 +0000 UTC m=+0.043237770 container create e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:14:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:42 np0005545273 systemd[1]: Started libpod-conmon-e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec.scope.
Dec  4 05:14:42 np0005545273 podman[77949]: 2025-12-04 10:14:42.700664668 +0000 UTC m=+0.022669669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:42 np0005545273 podman[77949]: 2025-12-04 10:14:42.825084339 +0000 UTC m=+0.147089330 container init e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:42 np0005545273 podman[77949]: 2025-12-04 10:14:42.83289458 +0000 UTC m=+0.154899551 container start e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  4 05:14:42 np0005545273 podman[77949]: 2025-12-04 10:14:42.836322962 +0000 UTC m=+0.158327933 container attach e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:14:43 np0005545273 podman[78057]: 2025-12-04 10:14:43.157809213 +0000 UTC m=+0.042989466 container create 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:14:43 np0005545273 systemd[1]: Started libpod-conmon-4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8.scope.
Dec  4 05:14:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:43 np0005545273 podman[78057]: 2025-12-04 10:14:43.134037135 +0000 UTC m=+0.019217398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:43 np0005545273 podman[78057]: 2025-12-04 10:14:43.237651911 +0000 UTC m=+0.122832184 container init 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:43 np0005545273 podman[78057]: 2025-12-04 10:14:43.244372832 +0000 UTC m=+0.129553085 container start 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:43 np0005545273 crazy_tu[78073]: 167 167
Dec  4 05:14:43 np0005545273 systemd[1]: libpod-4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8.scope: Deactivated successfully.
Dec  4 05:14:43 np0005545273 podman[78057]: 2025-12-04 10:14:43.248343333 +0000 UTC m=+0.133523596 container attach 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:14:43 np0005545273 podman[78057]: 2025-12-04 10:14:43.2486726 +0000 UTC m=+0.133852843 container died 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8f93a1bf0568521a92dafce3f2fe9774dcf55d63cdb22ce05b7c6bf145171a2d-merged.mount: Deactivated successfully.
Dec  4 05:14:43 np0005545273 podman[78057]: 2025-12-04 10:14:43.284452644 +0000 UTC m=+0.169632887 container remove 4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:43 np0005545273 systemd[1]: libpod-conmon-4fb87b8c5b86b15b18274721efa2dcaa28e12971b957484094df58c892f694e8.scope: Deactivated successfully.
Dec  4 05:14:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec  4 05:14:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2503549990' entity='client.admin' 
Dec  4 05:14:43 np0005545273 fervent_chebyshev[77993]: set mgr/dashboard/cluster/status
Dec  4 05:14:43 np0005545273 systemd[1]: libpod-e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec.scope: Deactivated successfully.
Dec  4 05:14:43 np0005545273 podman[77949]: 2025-12-04 10:14:43.439515667 +0000 UTC m=+0.761520638 container died e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b06c8748a413738f66867ec4396b32fe55d3b61d29bf282739e681fd1fe55649-merged.mount: Deactivated successfully.
Dec  4 05:14:43 np0005545273 podman[77949]: 2025-12-04 10:14:43.521225778 +0000 UTC m=+0.843230749 container remove e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec (image=quay.io/ceph/ceph:v20, name=fervent_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:14:43 np0005545273 systemd[1]: libpod-conmon-e7ad41d9674a383cb4d015b5edb07fd094ba02ff498785bea73eedde9c6c61ec.scope: Deactivated successfully.
Dec  4 05:14:43 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:43 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:43 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:43 np0005545273 ceph-mon[75358]: Added label _admin to host compute-0
Dec  4 05:14:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:43 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2503549990' entity='client.admin' 
Dec  4 05:14:43 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:44 np0005545273 podman[78150]: 2025-12-04 10:14:44.109393832 +0000 UTC m=+0.043464414 container create 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:14:44 np0005545273 systemd[1]: Started libpod-conmon-7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd.scope.
Dec  4 05:14:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:44 np0005545273 podman[78150]: 2025-12-04 10:14:44.09205401 +0000 UTC m=+0.026124632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:44 np0005545273 podman[78150]: 2025-12-04 10:14:44.195321561 +0000 UTC m=+0.129392163 container init 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:44 np0005545273 podman[78150]: 2025-12-04 10:14:44.203864224 +0000 UTC m=+0.137934826 container start 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:44 np0005545273 podman[78150]: 2025-12-04 10:14:44.207313546 +0000 UTC m=+0.141384118 container attach 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:14:44 np0005545273 python3[78196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:44 np0005545273 podman[78202]: 2025-12-04 10:14:44.568326059 +0000 UTC m=+0.059030714 container create cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:14:44 np0005545273 systemd[1]: Started libpod-conmon-cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0.scope.
Dec  4 05:14:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:44 np0005545273 podman[78202]: 2025-12-04 10:14:44.538911479 +0000 UTC m=+0.029616184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeec76d5f35643d64f93c9548a35a58751763521623d82aef676659f421c32f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeec76d5f35643d64f93c9548a35a58751763521623d82aef676659f421c32f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:44 np0005545273 podman[78202]: 2025-12-04 10:14:44.652557586 +0000 UTC m=+0.143262231 container init cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:44 np0005545273 ceph-mgr[75651]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  4 05:14:44 np0005545273 podman[78202]: 2025-12-04 10:14:44.661368585 +0000 UTC m=+0.152073210 container start cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:44 np0005545273 podman[78202]: 2025-12-04 10:14:44.683498923 +0000 UTC m=+0.174203538 container attach cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]: [
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:    {
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "available": false,
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "being_replaced": false,
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "ceph_device_lvm": false,
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "lsm_data": {},
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "lvs": [],
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "path": "/dev/sr0",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "rejected_reasons": [
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "Has a FileSystem",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "Insufficient space (<5GB)"
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        ],
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        "sys_api": {
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "actuators": null,
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "device_nodes": [
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:                "sr0"
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            ],
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "devname": "sr0",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "human_readable_size": "482.00 KB",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "id_bus": "ata",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "model": "QEMU DVD-ROM",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "nr_requests": "2",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "parent": "/dev/sr0",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "partitions": {},
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "path": "/dev/sr0",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "removable": "1",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "rev": "2.5+",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "ro": "0",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "rotational": "1",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "sas_address": "",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "sas_device_handle": "",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "scheduler_mode": "mq-deadline",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "sectors": 0,
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "sectorsize": "2048",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "size": 493568.0,
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "support_discard": "2048",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "type": "disk",
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:            "vendor": "QEMU"
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:        }
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]:    }
Dec  4 05:14:44 np0005545273 mystifying_newton[78166]: ]
Dec  4 05:14:44 np0005545273 systemd[1]: libpod-7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd.scope: Deactivated successfully.
Dec  4 05:14:44 np0005545273 podman[78150]: 2025-12-04 10:14:44.75825894 +0000 UTC m=+0.692329532 container died 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:14:44 np0005545273 systemd[1]: var-lib-containers-storage-overlay-60340ecca03b877d3e65843db08def80aeb91f7d9253b7bde34b78ef2734c522-merged.mount: Deactivated successfully.
Dec  4 05:14:44 np0005545273 podman[78150]: 2025-12-04 10:14:44.870285708 +0000 UTC m=+0.804356290 container remove 7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:14:44 np0005545273 systemd[1]: libpod-conmon-7e8e8595684f873b9c7ab2aa1102eb3ab3554aa94dfecc25f9236f052a34f4bd.scope: Deactivated successfully.
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:14:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:44 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  4 05:14:44 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3564853227' entity='client.admin' 
Dec  4 05:14:45 np0005545273 systemd[1]: libpod-cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0.scope: Deactivated successfully.
Dec  4 05:14:45 np0005545273 podman[78202]: 2025-12-04 10:14:45.095264221 +0000 UTC m=+0.585968906 container died cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:45 np0005545273 systemd[1]: var-lib-containers-storage-overlay-adeec76d5f35643d64f93c9548a35a58751763521623d82aef676659f421c32f-merged.mount: Deactivated successfully.
Dec  4 05:14:45 np0005545273 podman[78202]: 2025-12-04 10:14:45.157847827 +0000 UTC m=+0.648552482 container remove cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0 (image=quay.io/ceph/ceph:v20, name=loving_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  4 05:14:45 np0005545273 systemd[1]: libpod-conmon-cdc6c068f6ece949a26259fe080f61602a78dbbfa181f55a1387895c88accbb0.scope: Deactivated successfully.
Dec  4 05:14:45 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf
Dec  4 05:14:45 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf
Dec  4 05:14:45 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: Updating compute-0:/etc/ceph/ceph.conf
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3564853227' entity='client.admin' 
Dec  4 05:14:45 np0005545273 ceph-mon[75358]: Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.conf
Dec  4 05:14:46 np0005545273 ansible-async_wrapper.py[79561]: Invoked with j52032110365 30 /home/zuul/.ansible/tmp/ansible-tmp-1764843285.524177-36393-56964339261068/AnsiballZ_command.py _
Dec  4 05:14:46 np0005545273 ansible-async_wrapper.py[79616]: Starting module and watcher
Dec  4 05:14:46 np0005545273 ansible-async_wrapper.py[79616]: Start watching 79617 (30)
Dec  4 05:14:46 np0005545273 ansible-async_wrapper.py[79617]: Start module (79617)
Dec  4 05:14:46 np0005545273 ansible-async_wrapper.py[79561]: Return async_wrapper task started.
Dec  4 05:14:46 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  4 05:14:46 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  4 05:14:46 np0005545273 python3[79618]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:46 np0005545273 podman[79692]: 2025-12-04 10:14:46.418152958 +0000 UTC m=+0.051249884 container create c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:46 np0005545273 systemd[1]: Started libpod-conmon-c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e.scope.
Dec  4 05:14:46 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c58a1ffd35261111e7088b5821debf25250507007a72995c5a214708424a54/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c58a1ffd35261111e7088b5821debf25250507007a72995c5a214708424a54/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:46 np0005545273 podman[79692]: 2025-12-04 10:14:46.395857456 +0000 UTC m=+0.028954432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:46 np0005545273 podman[79692]: 2025-12-04 10:14:46.495230256 +0000 UTC m=+0.128327212 container init c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:46 np0005545273 podman[79692]: 2025-12-04 10:14:46.502347705 +0000 UTC m=+0.135444641 container start c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:14:46 np0005545273 podman[79692]: 2025-12-04 10:14:46.505648464 +0000 UTC m=+0.138745440 container attach c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:14:46 np0005545273 ceph-mgr[75651]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  4 05:14:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:14:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  4 05:14:46 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring
Dec  4 05:14:46 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring
Dec  4 05:14:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  4 05:14:46 np0005545273 musing_mahavira[79751]: 
Dec  4 05:14:46 np0005545273 musing_mahavira[79751]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  4 05:14:46 np0005545273 systemd[1]: libpod-c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e.scope: Deactivated successfully.
Dec  4 05:14:46 np0005545273 podman[79692]: 2025-12-04 10:14:46.927677345 +0000 UTC m=+0.560774281 container died c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:14:46 np0005545273 ceph-mon[75358]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  4 05:14:46 np0005545273 ceph-mon[75358]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  4 05:14:46 np0005545273 ceph-mon[75358]: Updating compute-0:/var/lib/ceph/f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d/config/ceph.client.admin.keyring
Dec  4 05:14:46 np0005545273 systemd[1]: var-lib-containers-storage-overlay-70c58a1ffd35261111e7088b5821debf25250507007a72995c5a214708424a54-merged.mount: Deactivated successfully.
Dec  4 05:14:46 np0005545273 podman[79692]: 2025-12-04 10:14:46.965368624 +0000 UTC m=+0.598465560 container remove c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e (image=quay.io/ceph/ceph:v20, name=musing_mahavira, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:14:46 np0005545273 systemd[1]: libpod-conmon-c01dc34a6d2a6f00f4fde38d9e04b71eb4f1f782147b270505f718b803ed4a2e.scope: Deactivated successfully.
Dec  4 05:14:46 np0005545273 ansible-async_wrapper.py[79617]: Module complete (79617)
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:47 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev 4d36a4e2-6b74-4f7d-9d51-5dbcc8b76310 (Updating crash deployment (+1 -> 1))
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:47 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  4 05:14:47 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  4 05:14:47 np0005545273 python3[80223]: ansible-ansible.legacy.async_status Invoked with jid=j52032110365.79561 mode=status _async_dir=/root/.ansible_async
Dec  4 05:14:47 np0005545273 podman[80256]: 2025-12-04 10:14:47.77788138 +0000 UTC m=+0.036466438 container create 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:47 np0005545273 systemd[1]: Started libpod-conmon-7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b.scope.
Dec  4 05:14:47 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:47 np0005545273 podman[80256]: 2025-12-04 10:14:47.846830602 +0000 UTC m=+0.105415700 container init 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:14:47 np0005545273 podman[80256]: 2025-12-04 10:14:47.853755827 +0000 UTC m=+0.112340885 container start 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:14:47 np0005545273 systemd[1]: libpod-7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b.scope: Deactivated successfully.
Dec  4 05:14:47 np0005545273 great_agnesi[80299]: 167 167
Dec  4 05:14:47 np0005545273 podman[80256]: 2025-12-04 10:14:47.762946971 +0000 UTC m=+0.021532039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:47 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:47 np0005545273 podman[80256]: 2025-12-04 10:14:47.941995086 +0000 UTC m=+0.200580154 container attach 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:47 np0005545273 podman[80256]: 2025-12-04 10:14:47.942367293 +0000 UTC m=+0.200952361 container died 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:14:47 np0005545273 python3[80325]: ansible-ansible.legacy.async_status Invoked with jid=j52032110365.79561 mode=cleanup _async_dir=/root/.ansible_async
Dec  4 05:14:48 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bcb4a812cddb849627bd1eaa35452d249231c35c6023a7a26f2d24356e5327e3-merged.mount: Deactivated successfully.
Dec  4 05:14:48 np0005545273 podman[80256]: 2025-12-04 10:14:48.172523788 +0000 UTC m=+0.431108836 container remove 7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:48 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec  4 05:14:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  4 05:14:48 np0005545273 ceph-mon[75358]: Deploying daemon crash.compute-0 on compute-0
Dec  4 05:14:48 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:48 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:48 np0005545273 systemd[1]: libpod-conmon-7288ebaeba10d38d8ff26cd03302c20f936c2dcd552a03087b0696fe041e2e7b.scope: Deactivated successfully.
Dec  4 05:14:48 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:48 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:48 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:48 np0005545273 python3[80402]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 05:14:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:14:48 np0005545273 systemd[1]: Starting Ceph crash.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:14:49 np0005545273 podman[80492]: 2025-12-04 10:14:49.049164458 +0000 UTC m=+0.066560559 container create 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:14:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:49 np0005545273 podman[80492]: 2025-12-04 10:14:49.013178151 +0000 UTC m=+0.030574332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b282b380ce1e354883994df9c1f04d7f3aa6a707425c103170b52cda539916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:49 np0005545273 podman[80492]: 2025-12-04 10:14:49.121129495 +0000 UTC m=+0.138525616 container init 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:14:49 np0005545273 podman[80492]: 2025-12-04 10:14:49.136022734 +0000 UTC m=+0.153418825 container start 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:49 np0005545273 bash[80492]: 821fa491a4b14740c6d07417995d4d9b3d35de1895c35846a9ad7417a8a950ec
Dec  4 05:14:49 np0005545273 systemd[1]: Started Ceph crash.compute-0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev 4d36a4e2-6b74-4f7d-9d51-5dbcc8b76310 (Updating crash deployment (+1 -> 1))
Dec  4 05:14:49 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event 4d36a4e2-6b74-4f7d-9d51-5dbcc8b76310 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev 3adc2683-100e-447d-9944-af48b9dc8b4a (Updating mgr deployment (+1 -> 2))
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr services"} : dispatch
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:49 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.tucvmw on compute-0
Dec  4 05:14:49 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.tucvmw on compute-0
Dec  4 05:14:49 np0005545273 python3[80536]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.312+0000 7f8d6e5f7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.312+0000 7f8d6e5f7640 -1 AuthRegistry(0x7f8d68052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.314+0000 7f8d6e5f7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.314+0000 7f8d6e5f7640 -1 AuthRegistry(0x7f8d6e5f5fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.314+0000 7f8d67fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: 2025-12-04T10:14:49.315+0000 7f8d6e5f7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  4 05:14:49 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-crash-compute-0[80519]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  4 05:14:49 np0005545273 podman[80545]: 2025-12-04 10:14:49.334384056 +0000 UTC m=+0.041833325 container create 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:49 np0005545273 systemd[1]: Started libpod-conmon-185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a.scope.
Dec  4 05:14:49 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:49 np0005545273 podman[80545]: 2025-12-04 10:14:49.31571375 +0000 UTC m=+0.023163049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:49 np0005545273 podman[80545]: 2025-12-04 10:14:49.42173598 +0000 UTC m=+0.129185249 container init 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:14:49 np0005545273 podman[80545]: 2025-12-04 10:14:49.434807365 +0000 UTC m=+0.142256634 container start 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:14:49 np0005545273 podman[80545]: 2025-12-04 10:14:49.439117812 +0000 UTC m=+0.146567081 container attach 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  4 05:14:49 np0005545273 vigorous_kepler[80614]: 
Dec  4 05:14:49 np0005545273 vigorous_kepler[80614]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  4 05:14:49 np0005545273 systemd[1]: libpod-185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a.scope: Deactivated successfully.
Dec  4 05:14:49 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:49 np0005545273 podman[80680]: 2025-12-04 10:14:49.884732619 +0000 UTC m=+0.056983957 container create de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:14:49 np0005545273 podman[80695]: 2025-12-04 10:14:49.89923184 +0000 UTC m=+0.036525418 container died 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:49 np0005545273 systemd[1]: Started libpod-conmon-de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6.scope.
Dec  4 05:14:49 np0005545273 podman[80680]: 2025-12-04 10:14:49.858254452 +0000 UTC m=+0.030505870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-acaad59d0291f039f72f90602bd495eba2624768292f2ba9b93165ed6d2b005d-merged.mount: Deactivated successfully.
Dec  4 05:14:49 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:49 np0005545273 podman[80695]: 2025-12-04 10:14:49.981492721 +0000 UTC m=+0.118786229 container remove 185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a (image=quay.io/ceph/ceph:v20, name=vigorous_kepler, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:14:49 np0005545273 podman[80680]: 2025-12-04 10:14:49.986466091 +0000 UTC m=+0.158717509 container init de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:49 np0005545273 systemd[1]: libpod-conmon-185ab9763cfe8ec85a57a060c5faf8b44095c8c826b1260e108a5790748e463a.scope: Deactivated successfully.
Dec  4 05:14:49 np0005545273 podman[80680]: 2025-12-04 10:14:49.99749385 +0000 UTC m=+0.169745188 container start de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:14:50 np0005545273 podman[80680]: 2025-12-04 10:14:50.001499142 +0000 UTC m=+0.173750530 container attach de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Dec  4 05:14:50 np0005545273 funny_ritchie[80714]: 167 167
Dec  4 05:14:50 np0005545273 systemd[1]: libpod-de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6.scope: Deactivated successfully.
Dec  4 05:14:50 np0005545273 podman[80719]: 2025-12-04 10:14:50.050579726 +0000 UTC m=+0.029802878 container died de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:50 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f5aa5a6578536324dc58b133b1ecb1335713a43a9d428fc1d2a6beeec4ccbdbb-merged.mount: Deactivated successfully.
Dec  4 05:14:50 np0005545273 podman[80719]: 2025-12-04 10:14:50.091860359 +0000 UTC m=+0.071083511 container remove de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:14:50 np0005545273 systemd[1]: libpod-conmon-de841109a0f94187282b591e31444ba3d912e056ce7e2530e2af486645d5a2b6.scope: Deactivated successfully.
Dec  4 05:14:50 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:50 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:50 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.tucvmw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  4 05:14:50 np0005545273 ceph-mon[75358]: Deploying daemon mgr.compute-0.tucvmw on compute-0
Dec  4 05:14:50 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:50 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:50 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:50 np0005545273 python3[80798]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:14:50 np0005545273 podman[80836]: 2025-12-04 10:14:50.664186718 +0000 UTC m=+0.043528795 container create 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  4 05:14:50 np0005545273 podman[80836]: 2025-12-04 10:14:50.646873127 +0000 UTC m=+0.026215204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:50 np0005545273 systemd[1]: Started libpod-conmon-68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da.scope.
Dec  4 05:14:50 np0005545273 systemd[1]: Starting Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:14:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:50 np0005545273 podman[80836]: 2025-12-04 10:14:50.811701895 +0000 UTC m=+0.191043972 container init 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:50 np0005545273 podman[80836]: 2025-12-04 10:14:50.823199863 +0000 UTC m=+0.202541940 container start 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 05:14:50 np0005545273 podman[80836]: 2025-12-04 10:14:50.827052592 +0000 UTC m=+0.206394679 container attach 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:14:50 np0005545273 podman[80923]: 2025-12-04 10:14:50.988743085 +0000 UTC m=+0.044831159 container create 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc/merged/var/lib/ceph/mgr/ceph-compute-0.tucvmw supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:51 np0005545273 podman[80923]: 2025-12-04 10:14:51.053646433 +0000 UTC m=+0.109734527 container init 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:14:51 np0005545273 podman[80923]: 2025-12-04 10:14:51.062946471 +0000 UTC m=+0.119034545 container start 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:14:51 np0005545273 podman[80923]: 2025-12-04 10:14:50.967837898 +0000 UTC m=+0.023926022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:51 np0005545273 bash[80923]: 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68
Dec  4 05:14:51 np0005545273 systemd[1]: Started Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:51 np0005545273 ceph-mgr[80942]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:14:51 np0005545273 ceph-mgr[80942]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec  4 05:14:51 np0005545273 ceph-mgr[80942]: pidfile_write: ignore empty --pid-file
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'alerts'
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev 3adc2683-100e-447d-9944-af48b9dc8b4a (Updating mgr deployment (+1 -> 2))
Dec  4 05:14:51 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event 3adc2683-100e-447d-9944-af48b9dc8b4a (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec  4 05:14:51 np0005545273 ansible-async_wrapper.py[79616]: Done in kid B.
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/602074459' entity='client.admin' 
Dec  4 05:14:51 np0005545273 systemd[1]: libpod-68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da.scope: Deactivated successfully.
Dec  4 05:14:51 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'balancer'
Dec  4 05:14:51 np0005545273 podman[80989]: 2025-12-04 10:14:51.281925795 +0000 UTC m=+0.024019114 container died 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/602074459' entity='client.admin' 
Dec  4 05:14:51 np0005545273 systemd[1]: var-lib-containers-storage-overlay-dd79db9a608f51d0670d171ec7bbff0081a87f1296abc68e6ad83b71688b6c5a-merged.mount: Deactivated successfully.
Dec  4 05:14:51 np0005545273 podman[80989]: 2025-12-04 10:14:51.321342835 +0000 UTC m=+0.063436134 container remove 68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da (image=quay.io/ceph/ceph:v20, name=youthful_neumann, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:51 np0005545273 systemd[1]: libpod-conmon-68addea92e8e38e0e5715646f2bcc269be0a25091e58c4a999bd321cd546d4da.scope: Deactivated successfully.
Dec  4 05:14:51 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'cephadm'
Dec  4 05:14:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:14:51 np0005545273 python3[81080]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:51 np0005545273 podman[81105]: 2025-12-04 10:14:51.696509152 +0000 UTC m=+0.029537933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:51 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:51 np0005545273 podman[81105]: 2025-12-04 10:14:51.982454644 +0000 UTC m=+0.315483435 container create 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:52 np0005545273 systemd[1]: Started libpod-conmon-5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335.scope.
Dec  4 05:14:52 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:52 np0005545273 podman[81105]: 2025-12-04 10:14:52.114234507 +0000 UTC m=+0.447263378 container init 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:14:52 np0005545273 podman[81105]: 2025-12-04 10:14:52.126830444 +0000 UTC m=+0.459859245 container start 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:14:52 np0005545273 podman[81105]: 2025-12-04 10:14:52.131192222 +0000 UTC m=+0.464221073 container attach 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:52 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'crash'
Dec  4 05:14:52 np0005545273 podman[81153]: 2025-12-04 10:14:52.215010482 +0000 UTC m=+0.084006884 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:14:52 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'dashboard'
Dec  4 05:14:52 np0005545273 podman[81153]: 2025-12-04 10:14:52.33424619 +0000 UTC m=+0.203242652 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3658591949' entity='client.admin' 
Dec  4 05:14:52 np0005545273 systemd[1]: libpod-5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335.scope: Deactivated successfully.
Dec  4 05:14:52 np0005545273 podman[81105]: 2025-12-04 10:14:52.598888847 +0000 UTC m=+0.931917678 container died 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:52 np0005545273 systemd[1]: var-lib-containers-storage-overlay-16b180e832dc98e1dd8848fa9fa4a0a7a3805bd68532969d832a0e4ba552727b-merged.mount: Deactivated successfully.
Dec  4 05:14:52 np0005545273 podman[81105]: 2025-12-04 10:14:52.63795099 +0000 UTC m=+0.970979751 container remove 5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335 (image=quay.io/ceph/ceph:v20, name=pensive_burnell, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:14:52 np0005545273 systemd[1]: libpod-conmon-5b215d48204813082d92890de9ef50dfa9ce2cba8494afb1827fedaed0b37335.scope: Deactivated successfully.
Dec  4 05:14:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:52 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 2 completed events
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:52 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  4 05:14:52 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:52 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  4 05:14:52 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  4 05:14:53 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'devicehealth'
Dec  4 05:14:53 np0005545273 python3[81344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:53 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'diskprediction_local'
Dec  4 05:14:53 np0005545273 podman[81396]: 2025-12-04 10:14:53.127253863 +0000 UTC m=+0.049629154 container create 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:53 np0005545273 systemd[1]: Started libpod-conmon-16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6.scope.
Dec  4 05:14:53 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:53 np0005545273 podman[81396]: 2025-12-04 10:14:53.104037406 +0000 UTC m=+0.026412707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:53 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:53 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:53 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:53 np0005545273 podman[81396]: 2025-12-04 10:14:53.221078043 +0000 UTC m=+0.143453334 container init 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:53 np0005545273 podman[81396]: 2025-12-04 10:14:53.227317726 +0000 UTC m=+0.149693007 container start 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:14:53 np0005545273 podman[81396]: 2025-12-04 10:14:53.231628874 +0000 UTC m=+0.154004205 container attach 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:53 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw[80938]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  4 05:14:53 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw[80938]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  4 05:14:53 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw[80938]:  from numpy import show_config as show_numpy_config
Dec  4 05:14:53 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'influx'
Dec  4 05:14:53 np0005545273 podman[81432]: 2025-12-04 10:14:53.298772633 +0000 UTC m=+0.036178133 container create ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:14:53 np0005545273 systemd[1]: Started libpod-conmon-ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f.scope.
Dec  4 05:14:53 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'insights'
Dec  4 05:14:53 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:53 np0005545273 podman[81432]: 2025-12-04 10:14:53.377834017 +0000 UTC m=+0.115239517 container init ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:53 np0005545273 podman[81432]: 2025-12-04 10:14:53.282272536 +0000 UTC m=+0.019678056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:53 np0005545273 podman[81432]: 2025-12-04 10:14:53.384048069 +0000 UTC m=+0.121453569 container start ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:14:53 np0005545273 podman[81432]: 2025-12-04 10:14:53.387204786 +0000 UTC m=+0.124610306 container attach ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:53 np0005545273 wonderful_payne[81467]: 167 167
Dec  4 05:14:53 np0005545273 systemd[1]: libpod-ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f.scope: Deactivated successfully.
Dec  4 05:14:53 np0005545273 podman[81432]: 2025-12-04 10:14:53.390895471 +0000 UTC m=+0.128300971 container died ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:14:53 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f379d8f8a275b0d075b42ed652a7ddb91d3bd6a0b35c58f1329fbbbc089723a4-merged.mount: Deactivated successfully.
Dec  4 05:14:53 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'iostat'
Dec  4 05:14:53 np0005545273 podman[81432]: 2025-12-04 10:14:53.44019201 +0000 UTC m=+0.177597510 container remove ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f (image=quay.io/ceph/ceph:v20, name=wonderful_payne, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:14:53 np0005545273 systemd[1]: libpod-conmon-ab3cff39204787114ed180f9dfdb492c292d1f02b2804e73459fec459973c36f.scope: Deactivated successfully.
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.iwufnj (unknown last config time)...
Dec  4 05:14:53 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.iwufnj (unknown last config time)...
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.iwufnj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.iwufnj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mgr services"} : dispatch
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:53 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.iwufnj on compute-0
Dec  4 05:14:53 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.iwufnj on compute-0
Dec  4 05:14:53 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'k8sevents'
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3658591949' entity='client.admin' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.iwufnj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec  4 05:14:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec  4 05:14:53 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:53 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'localpool'
Dec  4 05:14:53 np0005545273 podman[81552]: 2025-12-04 10:14:53.98875748 +0000 UTC m=+0.062576527 container create eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:54 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'mds_autoscaler'
Dec  4 05:14:54 np0005545273 systemd[1]: Started libpod-conmon-eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979.scope.
Dec  4 05:14:54 np0005545273 podman[81552]: 2025-12-04 10:14:53.967982297 +0000 UTC m=+0.041801364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:54 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:54 np0005545273 podman[81552]: 2025-12-04 10:14:54.091276957 +0000 UTC m=+0.165096034 container init eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec  4 05:14:54 np0005545273 podman[81552]: 2025-12-04 10:14:54.102077611 +0000 UTC m=+0.175896648 container start eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:14:54 np0005545273 podman[81552]: 2025-12-04 10:14:54.106333388 +0000 UTC m=+0.180152425 container attach eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:54 np0005545273 eager_heyrovsky[81569]: 167 167
Dec  4 05:14:54 np0005545273 systemd[1]: libpod-eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979.scope: Deactivated successfully.
Dec  4 05:14:54 np0005545273 podman[81552]: 2025-12-04 10:14:54.111059954 +0000 UTC m=+0.184879011 container died eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:54 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e3c681765afbea1cc938e99b9ccc0c822666e576d2f0330c01c9ef3adae4a25b-merged.mount: Deactivated successfully.
Dec  4 05:14:54 np0005545273 podman[81552]: 2025-12-04 10:14:54.155398102 +0000 UTC m=+0.229217129 container remove eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979 (image=quay.io/ceph/ceph:v20, name=eager_heyrovsky, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:14:54 np0005545273 systemd[1]: libpod-conmon-eeb34159c64d4832b02b173df048c481785754d64b3ea551f3776149440f9979.scope: Deactivated successfully.
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:54 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'mirroring'
Dec  4 05:14:54 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'nfs'
Dec  4 05:14:54 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'orchestrator'
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: Reconfiguring mgr.compute-0.iwufnj (unknown last config time)...
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: Reconfiguring daemon mgr.compute-0.iwufnj on compute-0
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  4 05:14:54 np0005545273 practical_bouman[81412]: set require_min_compat_client to mimic
Dec  4 05:14:54 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  4 05:14:54 np0005545273 systemd[1]: libpod-16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6.scope: Deactivated successfully.
Dec  4 05:14:54 np0005545273 podman[81396]: 2025-12-04 10:14:54.63602579 +0000 UTC m=+1.558401081 container died 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:14:54 np0005545273 systemd[1]: var-lib-containers-storage-overlay-95fce9a95fbca0ac726a08b8e843a3d095a2770b264a69d520bb5960562d4bc2-merged.mount: Deactivated successfully.
Dec  4 05:14:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:14:54 np0005545273 podman[81396]: 2025-12-04 10:14:54.68159568 +0000 UTC m=+1.603971001 container remove 16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6 (image=quay.io/ceph/ceph:v20, name=practical_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:54 np0005545273 systemd[1]: libpod-conmon-16fa1b837848ad0aff6cf2e28c605120e7f5c05760ca7e87affe765a0dbb2dd6.scope: Deactivated successfully.
Dec  4 05:14:54 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'osd_perf_query'
Dec  4 05:14:54 np0005545273 podman[81691]: 2025-12-04 10:14:54.84592116 +0000 UTC m=+0.080493841 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:54 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'osd_support'
Dec  4 05:14:54 np0005545273 podman[81691]: 2025-12-04 10:14:54.971523152 +0000 UTC m=+0.206095773 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:54 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'pg_autoscaler'
Dec  4 05:14:55 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'progress'
Dec  4 05:14:55 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'prometheus'
Dec  4 05:14:55 np0005545273 python3[81804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:55 np0005545273 podman[81826]: 2025-12-04 10:14:55.441890754 +0000 UTC m=+0.042154950 container create 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:14:55 np0005545273 systemd[1]: Started libpod-conmon-33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05.scope.
Dec  4 05:14:55 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:55 np0005545273 podman[81826]: 2025-12-04 10:14:55.504630144 +0000 UTC m=+0.104894370 container init 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:14:55 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'rbd_support'
Dec  4 05:14:55 np0005545273 podman[81826]: 2025-12-04 10:14:55.514126096 +0000 UTC m=+0.114390292 container start 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:14:55 np0005545273 podman[81826]: 2025-12-04 10:14:55.517540467 +0000 UTC m=+0.117804683 container attach 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  4 05:14:55 np0005545273 podman[81826]: 2025-12-04 10:14:55.428005594 +0000 UTC m=+0.028269820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:55 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'rgw'
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3103592500' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:55 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:55 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'rook'
Dec  4 05:14:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Added host compute-0
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 eloquent_tharp[81864]: Added host 'compute-0' with addr '192.168.122.100'
Dec  4 05:14:56 np0005545273 eloquent_tharp[81864]: Scheduled mon update...
Dec  4 05:14:56 np0005545273 eloquent_tharp[81864]: Scheduled mgr update...
Dec  4 05:14:56 np0005545273 eloquent_tharp[81864]: Scheduled osd.default_drive_group update...
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev 6872cb54-2e25-4297-bf9c-8149799b5fdd (Updating mgr deployment (-1 -> 1))
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.tucvmw from compute-0 -- ports [8765]
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.tucvmw from compute-0 -- ports [8765]
Dec  4 05:14:56 np0005545273 systemd[1]: libpod-33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05.scope: Deactivated successfully.
Dec  4 05:14:56 np0005545273 podman[81826]: 2025-12-04 10:14:56.453588177 +0000 UTC m=+1.053852373 container died 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:14:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f8e698ff29c3f72ef970dd87a953badc87f67299df3e2c1a3224596f4fae3034-merged.mount: Deactivated successfully.
Dec  4 05:14:56 np0005545273 podman[81826]: 2025-12-04 10:14:56.496763295 +0000 UTC m=+1.097027501 container remove 33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05 (image=quay.io/ceph/ceph:v20, name=eloquent_tharp, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:56 np0005545273 systemd[1]: libpod-conmon-33104feb02c46adfe6fb3ecf6e0b6e0c10b0e933ea7645a207edbb158c232c05.scope: Deactivated successfully.
Dec  4 05:14:56 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'selftest'
Dec  4 05:14:56 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'smb'
Dec  4 05:14:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:14:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:14:56 np0005545273 systemd[1]: Stopping Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:14:56 np0005545273 python3[82052]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:14:56 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'snap_schedule'
Dec  4 05:14:57 np0005545273 podman[82079]: 2025-12-04 10:14:57.016208621 +0000 UTC m=+0.045485800 container create 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Dec  4 05:14:57 np0005545273 ceph-mgr[80942]: mgr[py] Loading python module 'stats'
Dec  4 05:14:57 np0005545273 systemd[1]: Started libpod-conmon-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope.
Dec  4 05:14:57 np0005545273 podman[82079]: 2025-12-04 10:14:56.997907342 +0000 UTC m=+0.027184541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:14:57 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:57 np0005545273 podman[82079]: 2025-12-04 10:14:57.119137645 +0000 UTC m=+0.148414934 container init 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:57 np0005545273 podman[82103]: 2025-12-04 10:14:57.121764642 +0000 UTC m=+0.107178711 container died 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:14:57 np0005545273 podman[82079]: 2025-12-04 10:14:57.128810509 +0000 UTC m=+0.158087728 container start 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:57 np0005545273 podman[82079]: 2025-12-04 10:14:57.135942128 +0000 UTC m=+0.165219307 container attach 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:14:57 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c43b2f3be40da5e7788d2368f5a7ef8f823fd8abdd106aa1c066115ed558e1dc-merged.mount: Deactivated successfully.
Dec  4 05:14:57 np0005545273 podman[82103]: 2025-12-04 10:14:57.187320393 +0000 UTC m=+0.172734432 container remove 18b9dfcb34692973619cc8ce2749ee559a83a0ef4390c44db5255d8a72612b68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:14:57 np0005545273 bash[82103]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-tucvmw
Dec  4 05:14:57 np0005545273 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.tucvmw.service: Main process exited, code=exited, status=143/n/a
Dec  4 05:14:57 np0005545273 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.tucvmw.service: Failed with result 'exit-code'.
Dec  4 05:14:57 np0005545273 systemd[1]: Stopped Ceph mgr.compute-0.tucvmw for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:14:57 np0005545273 systemd[1]: ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.tucvmw.service: Consumed 6.822s CPU time, 394.4M memory peak, read 0B from disk, written 964.0K to disk.
Dec  4 05:14:57 np0005545273 systemd[1]: Reloading.
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: Added host compute-0
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: Saving service mon spec with placement compute-0
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: Saving service mgr spec with placement compute-0
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: Saving service osd.default_drive_group spec with placement compute-0
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: Removing daemon mgr.compute-0.tucvmw from compute-0 -- ports [8765]
Dec  4 05:14:57 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:14:57 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388289462' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  4 05:14:57 np0005545273 objective_spence[82118]: 
Dec  4 05:14:57 np0005545273 objective_spence[82118]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":51,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-04T10:14:03:532003+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-04T10:14:03.534445+0000","services":{}},"progress_events":{"6872cb54-2e25-4297-bf9c-8149799b5fdd":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  4 05:14:57 np0005545273 systemd[1]: libpod-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope: Deactivated successfully.
Dec  4 05:14:57 np0005545273 conmon[82118]: conmon 5fb883146f40567fbf20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope/container/memory.events
Dec  4 05:14:57 np0005545273 podman[82079]: 2025-12-04 10:14:57.733741835 +0000 UTC m=+0.763019024 container died 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.tucvmw
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.tucvmw
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"} v 0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"} : dispatch
Dec  4 05:14:57 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3156cdae8c223ea0fd24972c6b298027b77071040eccc5eb1c4378a8e23921ba-merged.mount: Deactivated successfully.
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"}]': finished
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev 6872cb54-2e25-4297-bf9c-8149799b5fdd (Updating mgr deployment (-1 -> 1))
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event 6872cb54-2e25-4297-bf9c-8149799b5fdd (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  4 05:14:57 np0005545273 podman[82079]: 2025-12-04 10:14:57.784089342 +0000 UTC m=+0.813366541 container remove 5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65 (image=quay.io/ceph/ceph:v20, name=objective_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:14:57 np0005545273 systemd[1]: libpod-conmon-5fb883146f40567fbf205995f56b9cb60f75655b8e986d0f8f28d5c82dec2e65.scope: Deactivated successfully.
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 3 completed events
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:14:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:14:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:14:58 np0005545273 podman[82297]: 2025-12-04 10:14:58.260454802 +0000 UTC m=+0.044365969 container create c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:14:58 np0005545273 systemd[1]: Started libpod-conmon-c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183.scope.
Dec  4 05:14:58 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:58 np0005545273 podman[82297]: 2025-12-04 10:14:58.240757998 +0000 UTC m=+0.024669175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:58 np0005545273 podman[82297]: 2025-12-04 10:14:58.412807077 +0000 UTC m=+0.196718244 container init c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:14:58 np0005545273 podman[82297]: 2025-12-04 10:14:58.424768923 +0000 UTC m=+0.208680090 container start c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:14:58 np0005545273 suspicious_leavitt[82313]: 167 167
Dec  4 05:14:58 np0005545273 systemd[1]: libpod-c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183.scope: Deactivated successfully.
Dec  4 05:14:58 np0005545273 podman[82297]: 2025-12-04 10:14:58.569300848 +0000 UTC m=+0.353212065 container attach c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:14:58 np0005545273 podman[82297]: 2025-12-04 10:14:58.570760444 +0000 UTC m=+0.354671641 container died c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:14:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"} : dispatch
Dec  4 05:14:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.tucvmw"}]': finished
Dec  4 05:14:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:14:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:14:58 np0005545273 systemd[1]: var-lib-containers-storage-overlay-767fa2c31979eadff650e7dc84d7f1dd111c6fa3ad5b83ff549c39fe613e38a3-merged.mount: Deactivated successfully.
Dec  4 05:14:58 np0005545273 podman[82297]: 2025-12-04 10:14:58.623069676 +0000 UTC m=+0.406980843 container remove c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:14:58 np0005545273 systemd[1]: libpod-conmon-c5b133aeb0650999683361a2634e9d913679319c968c1dab6ade305242e66183.scope: Deactivated successfully.
Dec  4 05:14:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:14:58 np0005545273 podman[82340]: 2025-12-04 10:14:58.843185521 +0000 UTC m=+0.061356274 container create e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:14:58 np0005545273 systemd[1]: Started libpod-conmon-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope.
Dec  4 05:14:58 np0005545273 podman[82340]: 2025-12-04 10:14:58.812270298 +0000 UTC m=+0.030441131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:14:58 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:14:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:14:58 np0005545273 podman[82340]: 2025-12-04 10:14:58.956547629 +0000 UTC m=+0.174718462 container init e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:14:58 np0005545273 podman[82340]: 2025-12-04 10:14:58.970701613 +0000 UTC m=+0.188872406 container start e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:14:58 np0005545273 podman[82340]: 2025-12-04 10:14:58.976142085 +0000 UTC m=+0.194312938 container attach e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  4 05:14:59 np0005545273 ceph-mon[75358]: Removing key for mgr.compute-0.tucvmw
Dec  4 05:14:59 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:14:59 np0005545273 exciting_elgamal[82356]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:14:59 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:14:59 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:14:59 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d6d34217-6607-43be-80be-ae04b730142c
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"} v 0)
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"} : dispatch
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"}]': finished
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:00 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:01 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"} : dispatch
Dec  4 05:15:01 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2507979783' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6d34217-6607-43be-80be-ae04b730142c"}]': finished
Dec  4 05:15:01 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec  4 05:15:01 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  4 05:15:01 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  4 05:15:01 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:01 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec  4 05:15:01 np0005545273 lvm[82451]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:15:01 np0005545273 lvm[82451]: VG ceph_vg0 finished
Dec  4 05:15:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:01 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  4 05:15:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577417938' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec  4 05:15:01 np0005545273 exciting_elgamal[82356]: stderr: got monmap epoch 1
Dec  4 05:15:02 np0005545273 exciting_elgamal[82356]: --> Creating keyring file for osd.0
Dec  4 05:15:02 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec  4 05:15:02 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec  4 05:15:02 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid d6d34217-6607-43be-80be-ae04b730142c --setuser ceph --setgroup ceph
Dec  4 05:15:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:03 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  4 05:15:03 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  4 05:15:03 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:05 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:07 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: stderr: 2025-12-04T10:15:02.111+0000 7f93c19bb8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: stderr: 2025-12-04T10:15:02.135+0000 7f93c19bb8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:09 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8cc1daa3-82be-4bdc-8e62-fc5001daf8bb
Dec  4 05:15:09 np0005545273 ceph-mon[75358]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  4 05:15:09 np0005545273 ceph-mon[75358]: Cluster is now healthy
Dec  4 05:15:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"} v 0)
Dec  4 05:15:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"} : dispatch
Dec  4 05:15:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  4 05:15:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:09 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"}]': finished
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:10 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:10 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"} : dispatch
Dec  4 05:15:10 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3633251648' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb"}]': finished
Dec  4 05:15:10 np0005545273 lvm[83398]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:10 np0005545273 lvm[83398]: VG ceph_vg1 finished
Dec  4 05:15:10 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  4 05:15:10 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec  4 05:15:10 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  4 05:15:10 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:10 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  4 05:15:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  4 05:15:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595169929' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec  4 05:15:11 np0005545273 exciting_elgamal[82356]: stderr: got monmap epoch 1
Dec  4 05:15:11 np0005545273 exciting_elgamal[82356]: --> Creating keyring file for osd.1
Dec  4 05:15:11 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  4 05:15:11 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  4 05:15:11 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 8cc1daa3-82be-4bdc-8e62-fc5001daf8bb --setuser ceph --setgroup ceph
Dec  4 05:15:11 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: stderr: 2025-12-04T10:15:11.155+0000 7f7a3e26c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: stderr: 2025-12-04T10:15:11.172+0000 7f7a3e26c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:13 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2ee6d319-dca2-4c06-9365-2240b94f11cb
Dec  4 05:15:13 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"} v 0)
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"} : dispatch
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"}]': finished
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:14 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:14 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:14 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"} : dispatch
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1854585033' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2ee6d319-dca2-4c06-9365-2240b94f11cb"}]': finished
Dec  4 05:15:14 np0005545273 lvm[84346]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:15:14 np0005545273 lvm[84346]: VG ceph_vg2 finished
Dec  4 05:15:14 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec  4 05:15:14 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec  4 05:15:14 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  4 05:15:14 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:14 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec  4 05:15:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  4 05:15:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450039958' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec  4 05:15:14 np0005545273 exciting_elgamal[82356]: stderr: got monmap epoch 1
Dec  4 05:15:15 np0005545273 exciting_elgamal[82356]: --> Creating keyring file for osd.2
Dec  4 05:15:15 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec  4 05:15:15 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec  4 05:15:15 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 2ee6d319-dca2-4c06-9365-2240b94f11cb --setuser ceph --setgroup ceph
Dec  4 05:15:15 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: stderr: 2025-12-04T10:15:15.158+0000 7f52df8678c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: stderr: 2025-12-04T10:15:15.182+0000 7f52df8678c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  4 05:15:16 np0005545273 exciting_elgamal[82356]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec  4 05:15:16 np0005545273 systemd[1]: libpod-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope: Deactivated successfully.
Dec  4 05:15:16 np0005545273 systemd[1]: libpod-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope: Consumed 6.915s CPU time.
Dec  4 05:15:16 np0005545273 podman[82340]: 2025-12-04 10:15:16.335834877 +0000 UTC m=+17.554005680 container died e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:16 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ccddb484bf17bdb46fc418e393252390b6dd8f3f6d8e7f7ec57b64c60063c7c5-merged.mount: Deactivated successfully.
Dec  4 05:15:16 np0005545273 podman[82340]: 2025-12-04 10:15:16.399284155 +0000 UTC m=+17.617454928 container remove e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  4 05:15:16 np0005545273 systemd[1]: libpod-conmon-e83e9a63b81a8e47477d5455ddd2fe7d9bbabdce07dbe66bf88b61cc35d67c5e.scope: Deactivated successfully.
Dec  4 05:15:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:16 np0005545273 podman[85339]: 2025-12-04 10:15:16.873369848 +0000 UTC m=+0.042444247 container create a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:15:16 np0005545273 systemd[1]: Started libpod-conmon-a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b.scope.
Dec  4 05:15:16 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:16 np0005545273 podman[85339]: 2025-12-04 10:15:16.855710617 +0000 UTC m=+0.024785066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:16 np0005545273 podman[85339]: 2025-12-04 10:15:16.952940181 +0000 UTC m=+0.122014600 container init a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:16 np0005545273 podman[85339]: 2025-12-04 10:15:16.961007318 +0000 UTC m=+0.130081727 container start a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:16 np0005545273 podman[85339]: 2025-12-04 10:15:16.964031601 +0000 UTC m=+0.133106030 container attach a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:15:16 np0005545273 optimistic_wilson[85355]: 167 167
Dec  4 05:15:16 np0005545273 systemd[1]: libpod-a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b.scope: Deactivated successfully.
Dec  4 05:15:16 np0005545273 podman[85339]: 2025-12-04 10:15:16.967085746 +0000 UTC m=+0.136160155 container died a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:16 np0005545273 systemd[1]: var-lib-containers-storage-overlay-91d3a6be2c62225d64dd6b079831ffbf84120df8cb6174a149fdf0a4c5c67fef-merged.mount: Deactivated successfully.
Dec  4 05:15:17 np0005545273 podman[85339]: 2025-12-04 10:15:17.002342756 +0000 UTC m=+0.171417195 container remove a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:17 np0005545273 systemd[1]: libpod-conmon-a040d395e7d4d4ac442fd112c42f4fa935393cc2f4f65e2471816e81af3f9e2b.scope: Deactivated successfully.
Dec  4 05:15:17 np0005545273 podman[85379]: 2025-12-04 10:15:17.220695806 +0000 UTC m=+0.066901814 container create e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:17 np0005545273 systemd[1]: Started libpod-conmon-e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d.scope.
Dec  4 05:15:17 np0005545273 podman[85379]: 2025-12-04 10:15:17.192895257 +0000 UTC m=+0.039101305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:17 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:17 np0005545273 podman[85379]: 2025-12-04 10:15:17.327312339 +0000 UTC m=+0.173518387 container init e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:15:17 np0005545273 podman[85379]: 2025-12-04 10:15:17.334960146 +0000 UTC m=+0.181166144 container start e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:15:17 np0005545273 podman[85379]: 2025-12-04 10:15:17.338388379 +0000 UTC m=+0.184594377 container attach e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]: {
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:    "0": [
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:        {
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "devices": [
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "/dev/loop3"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            ],
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_name": "ceph_lv0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_size": "21470642176",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "name": "ceph_lv0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "tags": {
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.crush_device_class": "",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.encrypted": "0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osd_id": "0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.type": "block",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.vdo": "0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.with_tpm": "0"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            },
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "type": "block",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "vg_name": "ceph_vg0"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:        }
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:    ],
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:    "1": [
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:        {
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "devices": [
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "/dev/loop4"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            ],
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_name": "ceph_lv1",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_size": "21470642176",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "name": "ceph_lv1",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "tags": {
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.crush_device_class": "",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.encrypted": "0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osd_id": "1",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.type": "block",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.vdo": "0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.with_tpm": "0"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            },
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "type": "block",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "vg_name": "ceph_vg1"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:        }
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:    ],
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:    "2": [
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:        {
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "devices": [
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "/dev/loop5"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            ],
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_name": "ceph_lv2",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_size": "21470642176",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "name": "ceph_lv2",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "tags": {
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.crush_device_class": "",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.encrypted": "0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osd_id": "2",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.type": "block",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.vdo": "0",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:                "ceph.with_tpm": "0"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            },
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "type": "block",
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:            "vg_name": "ceph_vg2"
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:        }
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]:    ]
Dec  4 05:15:17 np0005545273 nice_ishizaka[85395]: }
Dec  4 05:15:17 np0005545273 systemd[1]: libpod-e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d.scope: Deactivated successfully.
Dec  4 05:15:17 np0005545273 podman[85379]: 2025-12-04 10:15:17.648025707 +0000 UTC m=+0.494231675 container died e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:15:17 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e286e83880d01123fc7e77e61aa80853da3225e6f371ed3a78b031df9f9be39f-merged.mount: Deactivated successfully.
Dec  4 05:15:17 np0005545273 podman[85379]: 2025-12-04 10:15:17.695768212 +0000 UTC m=+0.541974180 container remove e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:15:17 np0005545273 systemd[1]: libpod-conmon-e763034d62f7ab59b89d7f0bdfbd38060cf4c3f1c6e390f96c4db7544efa8e6d.scope: Deactivated successfully.
Dec  4 05:15:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  4 05:15:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec  4 05:15:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:15:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:15:17 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec  4 05:15:17 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec  4 05:15:17 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:18 np0005545273 podman[85510]: 2025-12-04 10:15:18.314926917 +0000 UTC m=+0.045262597 container create 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:18 np0005545273 systemd[1]: Started libpod-conmon-60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2.scope.
Dec  4 05:15:18 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:18 np0005545273 podman[85510]: 2025-12-04 10:15:18.29664199 +0000 UTC m=+0.026977690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:18 np0005545273 podman[85510]: 2025-12-04 10:15:18.404990395 +0000 UTC m=+0.135326095 container init 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:15:18 np0005545273 podman[85510]: 2025-12-04 10:15:18.412897358 +0000 UTC m=+0.143233038 container start 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:15:18 np0005545273 thirsty_williams[85526]: 167 167
Dec  4 05:15:18 np0005545273 systemd[1]: libpod-60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2.scope: Deactivated successfully.
Dec  4 05:15:18 np0005545273 podman[85510]: 2025-12-04 10:15:18.418224438 +0000 UTC m=+0.148560138 container attach 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:18 np0005545273 podman[85510]: 2025-12-04 10:15:18.419364795 +0000 UTC m=+0.149700495 container died 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:15:18 np0005545273 systemd[1]: var-lib-containers-storage-overlay-7832cf96527604160e077b24b0f83679ad13f3fd63ea300004bd604ea38b7d95-merged.mount: Deactivated successfully.
Dec  4 05:15:18 np0005545273 podman[85510]: 2025-12-04 10:15:18.460292034 +0000 UTC m=+0.190627714 container remove 60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_williams, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:18 np0005545273 systemd[1]: libpod-conmon-60efe658c94ab04bd556fe8b23b6e8b0b6ea04a19ef39526b15b264aefe407c2.scope: Deactivated successfully.
Dec  4 05:15:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec  4 05:15:18 np0005545273 podman[85556]: 2025-12-04 10:15:18.804831454 +0000 UTC m=+0.027440470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:18 np0005545273 podman[85556]: 2025-12-04 10:15:18.956081366 +0000 UTC m=+0.178690292 container create 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:15:19 np0005545273 systemd[1]: Started libpod-conmon-7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074.scope.
Dec  4 05:15:19 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:19 np0005545273 podman[85556]: 2025-12-04 10:15:19.078087535 +0000 UTC m=+0.300696491 container init 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:15:19 np0005545273 podman[85556]: 2025-12-04 10:15:19.084936433 +0000 UTC m=+0.307545389 container start 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:15:19 np0005545273 podman[85556]: 2025-12-04 10:15:19.08894711 +0000 UTC m=+0.311556056 container attach 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:19 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test[85572]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  4 05:15:19 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test[85572]:                            [--no-systemd] [--no-tmpfs]
Dec  4 05:15:19 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test[85572]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  4 05:15:19 np0005545273 systemd[1]: libpod-7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074.scope: Deactivated successfully.
Dec  4 05:15:19 np0005545273 podman[85556]: 2025-12-04 10:15:19.28762576 +0000 UTC m=+0.510234726 container died 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  4 05:15:19 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4d19f5491b5a7244811f3c03681c6a708db27b7e346348b1e5f7c8005c27b9f2-merged.mount: Deactivated successfully.
Dec  4 05:15:19 np0005545273 podman[85556]: 2025-12-04 10:15:19.378204231 +0000 UTC m=+0.600813167 container remove 7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:19 np0005545273 systemd[1]: libpod-conmon-7c5a6b30ef98f31602ecfaf62d8d2ff3b5d38c20db22bee27066140ea9587074.scope: Deactivated successfully.
Dec  4 05:15:19 np0005545273 ceph-mon[75358]: Deploying daemon osd.0 on compute-0
Dec  4 05:15:19 np0005545273 systemd[1]: Reloading.
Dec  4 05:15:19 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:15:19 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:15:19 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:20 np0005545273 systemd[1]: Reloading.
Dec  4 05:15:20 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:15:20 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:15:20 np0005545273 systemd[1]: Starting Ceph osd.0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:15:20 np0005545273 podman[85733]: 2025-12-04 10:15:20.586504405 +0000 UTC m=+0.057093345 container create 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:15:20 np0005545273 podman[85733]: 2025-12-04 10:15:20.566574949 +0000 UTC m=+0.037163929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:20 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:20 np0005545273 podman[85733]: 2025-12-04 10:15:20.735089152 +0000 UTC m=+0.205678112 container init 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:15:20 np0005545273 podman[85733]: 2025-12-04 10:15:20.740153206 +0000 UTC m=+0.210742146 container start 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:15:20 np0005545273 podman[85733]: 2025-12-04 10:15:20.781743291 +0000 UTC m=+0.252332261 container attach 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:15:20 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:20 np0005545273 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:20 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:20 np0005545273 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:21 np0005545273 lvm[85835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:15:21 np0005545273 lvm[85835]: VG ceph_vg0 finished
Dec  4 05:15:21 np0005545273 lvm[85836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:21 np0005545273 lvm[85836]: VG ceph_vg1 finished
Dec  4 05:15:21 np0005545273 lvm[85838]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:15:21 np0005545273 lvm[85838]: VG ceph_vg2 finished
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:21 np0005545273 bash[85733]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  4 05:15:21 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  4 05:15:21 np0005545273 bash[85733]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  4 05:15:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate[85749]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  4 05:15:21 np0005545273 bash[85733]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  4 05:15:21 np0005545273 systemd[1]: libpod-70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e.scope: Deactivated successfully.
Dec  4 05:15:21 np0005545273 systemd[1]: libpod-70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e.scope: Consumed 1.708s CPU time.
Dec  4 05:15:21 np0005545273 conmon[85749]: conmon 70a123ec28e72812a4f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e.scope/container/memory.events
Dec  4 05:15:21 np0005545273 podman[85733]: 2025-12-04 10:15:21.950705455 +0000 UTC m=+1.421294445 container died 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:21 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0d0fba03c8d0c7930e9d0bc73c08a334e397d287628440e907bdb5c6be559110-merged.mount: Deactivated successfully.
Dec  4 05:15:22 np0005545273 podman[85733]: 2025-12-04 10:15:22.014039642 +0000 UTC m=+1.484628582 container remove 70a123ec28e72812a4f18d927957444da92ce6a19cb65189adfb5a7c17b7620e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:22 np0005545273 podman[86000]: 2025-12-04 10:15:22.26142786 +0000 UTC m=+0.047496090 container create f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Dec  4 05:15:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e463c843ef058c09913f0c2dc05446c98588d66a64be843b3ddf98a680324978/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:22 np0005545273 podman[86000]: 2025-12-04 10:15:22.236650625 +0000 UTC m=+0.022718895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:22 np0005545273 podman[86000]: 2025-12-04 10:15:22.344758265 +0000 UTC m=+0.130826515 container init f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:15:22 np0005545273 podman[86000]: 2025-12-04 10:15:22.361876142 +0000 UTC m=+0.147944402 container start f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:22 np0005545273 bash[86000]: f4a07ff696942e750f7c85c5375dea5220ff8c39e9eccf82a7d5cabc76e6f733
Dec  4 05:15:22 np0005545273 systemd[1]: Started Ceph osd.0 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: pidfile_write: ignore empty --pid-file
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:15:22 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  4 05:15:22 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e400 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150e000 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: load: jerasure load: lrc 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x56116150fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount shared_bdev_used = 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Git sha 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: DB SUMMARY
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: DB Session ID:  PFQFCW5ZC5JN7BO8U6AB
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                     Options.env: 0x56116139fea0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                Options.info_log: 0x5611623f08a0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                 Options.wal_dir: db.wal
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.write_buffer_manager: 0x561161404b40
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.row_cache: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                              Options.wal_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.wal_compression: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_background_jobs: 4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Compression algorithms supported:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kZSTD supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kXpressCompression supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kBZip2Compression supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kLZ4Compression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kZlibCompression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: 	kSnappyCompression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a38d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ef8b57b4-a295-48e7-9e30-b2d54314d54d
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322772587, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322774798, "job": 1, "event": "recovery_finished"}
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: freelist init
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: freelist _read_cfg
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs umount
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) close
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bdev(0x5611621a5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluefs mount shared_bdev_used = 27262976
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Git sha 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: DB SUMMARY
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: DB Session ID:  PFQFCW5ZC5JN7BO8U6AA
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                     Options.env: 0x5611625c0a10
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                Options.info_log: 0x5611623f0a20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                                 Options.wal_dir: db.wal
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.write_buffer_manager: 0x561161405900
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.row_cache: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                              Options.wal_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.wal_compression: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_background_jobs: 4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Compression algorithms supported:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kZSTD supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kXpressCompression supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kBZip2Compression supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kLZ4Compression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kZlibCompression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: #011kSnappyCompression supported: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f0bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f10c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5611613a3a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f10c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a3a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5611623f10c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5611613a3a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ef8b57b4-a295-48e7-9e30-b2d54314d54d
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322828900, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322836492, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef8b57b4-a295-48e7-9e30-b2d54314d54d", "db_session_id": "PFQFCW5ZC5JN7BO8U6AA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322839888, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef8b57b4-a295-48e7-9e30-b2d54314d54d", "db_session_id": "PFQFCW5ZC5JN7BO8U6AA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322842858, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef8b57b4-a295-48e7-9e30-b2d54314d54d", "db_session_id": "PFQFCW5ZC5JN7BO8U6AA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843322844908, "job": 1, "event": "recovery_finished"}
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56116260a000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: DB pointer 0x5611625aa000
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 460.80 MB usag
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: _get_class not permitted to load lua
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: _get_class not permitted to load sdk
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0 0 load_pgs
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0 0 load_pgs opened 0 pgs
Dec  4 05:15:22 np0005545273 ceph-osd[86021]: osd.0 0 log_to_monitors true
Dec  4 05:15:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0[86017]: 2025-12-04T10:15:22.885+0000 7f8fb959a8c0 -1 osd.0 0 log_to_monitors true
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec  4 05:15:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:23 np0005545273 podman[86557]: 2025-12-04 10:15:23.004669113 +0000 UTC m=+0.043847511 container create be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:15:23 np0005545273 systemd[1]: Started libpod-conmon-be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1.scope.
Dec  4 05:15:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:23 np0005545273 podman[86557]: 2025-12-04 10:15:22.985445613 +0000 UTC m=+0.024624021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:23 np0005545273 podman[86557]: 2025-12-04 10:15:23.081437227 +0000 UTC m=+0.120615625 container init be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:23 np0005545273 podman[86557]: 2025-12-04 10:15:23.089163865 +0000 UTC m=+0.128342243 container start be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:15:23 np0005545273 podman[86557]: 2025-12-04 10:15:23.092051055 +0000 UTC m=+0.131229433 container attach be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 05:15:23 np0005545273 nervous_brown[86573]: 167 167
Dec  4 05:15:23 np0005545273 systemd[1]: libpod-be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1.scope: Deactivated successfully.
Dec  4 05:15:23 np0005545273 podman[86557]: 2025-12-04 10:15:23.097094119 +0000 UTC m=+0.136272537 container died be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-27fa8b5f9710f91c0cd172c50191d1413db68e541b0dfa2865aa2a469c9f2b05-merged.mount: Deactivated successfully.
Dec  4 05:15:23 np0005545273 podman[86557]: 2025-12-04 10:15:23.139389971 +0000 UTC m=+0.178568349 container remove be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:23 np0005545273 systemd[1]: libpod-conmon-be5bb0cc4e07a59c011d09c018264997b8f5bbd6a8b48725a3fa421feb45e7f1.scope: Deactivated successfully.
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: Deploying daemon osd.1 on compute-0
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:23 np0005545273 podman[86602]: 2025-12-04 10:15:23.442805348 +0000 UTC m=+0.058902769 container create 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:23 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:23 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:23 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:23 np0005545273 systemd[1]: Started libpod-conmon-246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab.scope.
Dec  4 05:15:23 np0005545273 podman[86602]: 2025-12-04 10:15:23.424919581 +0000 UTC m=+0.041017032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:23 np0005545273 podman[86602]: 2025-12-04 10:15:23.572787351 +0000 UTC m=+0.188884862 container init 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:23 np0005545273 podman[86602]: 2025-12-04 10:15:23.584549047 +0000 UTC m=+0.200646508 container start 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:23 np0005545273 podman[86602]: 2025-12-04 10:15:23.588536525 +0000 UTC m=+0.204633966 container attach 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Dec  4 05:15:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test[86618]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  4 05:15:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test[86618]:                            [--no-systemd] [--no-tmpfs]
Dec  4 05:15:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test[86618]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  4 05:15:23 np0005545273 systemd[1]: libpod-246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab.scope: Deactivated successfully.
Dec  4 05:15:23 np0005545273 podman[86602]: 2025-12-04 10:15:23.796004129 +0000 UTC m=+0.412101560 container died 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-7009b2c5613991ec0c976caa45034f12902084a2e2c71180f60da7660d559320-merged.mount: Deactivated successfully.
Dec  4 05:15:23 np0005545273 podman[86602]: 2025-12-04 10:15:23.846938983 +0000 UTC m=+0.463036414 container remove 246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:15:23 np0005545273 systemd[1]: libpod-conmon-246fa7f6193d697bf26062db450b674bdb30e1aaf97042f2bcc892a19cfab7ab.scope: Deactivated successfully.
Dec  4 05:15:23 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:23 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  4 05:15:23 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  4 05:15:24 np0005545273 systemd[1]: Reloading.
Dec  4 05:15:24 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:15:24 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec  4 05:15:24 np0005545273 ceph-osd[86021]: osd.0 0 done with init, starting boot process
Dec  4 05:15:24 np0005545273 ceph-osd[86021]: osd.0 0 start_boot
Dec  4 05:15:24 np0005545273 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  4 05:15:24 np0005545273 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  4 05:15:24 np0005545273 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  4 05:15:24 np0005545273 ceph-osd[86021]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  4 05:15:24 np0005545273 ceph-osd[86021]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:24 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:24 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:24 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  4 05:15:24 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:24 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:24 np0005545273 systemd[1]: Reloading.
Dec  4 05:15:24 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:15:24 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:15:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:24 np0005545273 systemd[1]: Starting Ceph osd.1 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:15:25 np0005545273 podman[86778]: 2025-12-04 10:15:25.170370798 +0000 UTC m=+0.072564543 container create 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:25 np0005545273 podman[86778]: 2025-12-04 10:15:25.138719415 +0000 UTC m=+0.040913170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:25 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:25 np0005545273 podman[86778]: 2025-12-04 10:15:25.289899335 +0000 UTC m=+0.192093140 container init 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:25 np0005545273 podman[86778]: 2025-12-04 10:15:25.297642604 +0000 UTC m=+0.199836369 container start 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:15:25 np0005545273 podman[86778]: 2025-12-04 10:15:25.310191371 +0000 UTC m=+0.212385136 container attach 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:15:25 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:25 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:25 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:25 np0005545273 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:25 np0005545273 ceph-mon[75358]: from='osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  4 05:15:25 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:25 np0005545273 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:25 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:26 np0005545273 lvm[86876]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:15:26 np0005545273 lvm[86879]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:26 np0005545273 lvm[86879]: VG ceph_vg1 finished
Dec  4 05:15:26 np0005545273 lvm[86876]: VG ceph_vg0 finished
Dec  4 05:15:26 np0005545273 lvm[86881]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:15:26 np0005545273 lvm[86881]: VG ceph_vg2 finished
Dec  4 05:15:26 np0005545273 lvm[86882]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:26 np0005545273 lvm[86882]: VG ceph_vg1 finished
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:26 np0005545273 bash[86778]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  4 05:15:26 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:26 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  4 05:15:26 np0005545273 bash[86778]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  4 05:15:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate[86793]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  4 05:15:26 np0005545273 bash[86778]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  4 05:15:26 np0005545273 systemd[1]: libpod-1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e.scope: Deactivated successfully.
Dec  4 05:15:26 np0005545273 systemd[1]: libpod-1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e.scope: Consumed 1.837s CPU time.
Dec  4 05:15:26 np0005545273 podman[86995]: 2025-12-04 10:15:26.654862794 +0000 UTC m=+0.030989708 container died 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:15:26
Dec  4 05:15:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:15:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:15:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] No pools available
Dec  4 05:15:26 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fb2847b3f98f30bc2f1e35bf2b91e7b7bb1b5f84a22821e472e099cdfa584662-merged.mount: Deactivated successfully.
Dec  4 05:15:27 np0005545273 podman[86995]: 2025-12-04 10:15:27.1078248 +0000 UTC m=+0.483951704 container remove 1966104ba3f65114f9fdcbc741bc6cae35a8d614e005a7e420167dd9e838518e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:27 np0005545273 podman[87051]: 2025-12-04 10:15:27.471412765 +0000 UTC m=+0.112362483 container create f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:15:27 np0005545273 podman[87051]: 2025-12-04 10:15:27.384612906 +0000 UTC m=+0.025562674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d2259eb54538a15d492b7995d9d87fa8b97d8ea9352d55056b426eca3806f2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:27 np0005545273 podman[87051]: 2025-12-04 10:15:27.581030082 +0000 UTC m=+0.221979820 container init f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:15:27 np0005545273 podman[87051]: 2025-12-04 10:15:27.589722634 +0000 UTC m=+0.230672312 container start f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:27 np0005545273 bash[87051]: f6ca53226c0f28dd275d5613685249253576ebb8e33a5dea7dc71ce5d58c96c5
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: pidfile_write: ignore empty --pid-file
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 systemd[1]: Started Ceph osd.1 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012400 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005012000 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:15:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: load: jerasure load: lrc 
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:27 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005013c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount shared_bdev_used = 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Git sha 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: DB SUMMARY
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: DB Session ID:  BRSQNPZ8VAPD8X1H06XT
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                     Options.env: 0x559004ea3ea0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x559005f2a8a0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x559004f04b40
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kZSTD supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kXpressCompression supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kBZip2Compression supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kLZ4Compression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kZlibCompression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kSnappyCompression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f2ac80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328150787, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328153808, "job": 1, "event": "recovery_finished"}
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  4 05:15:28 np0005545273 python3[87130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: freelist init
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: freelist _read_cfg
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs umount
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) close
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bdev(0x559005ca9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluefs mount shared_bdev_used = 27262976
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Git sha 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: DB SUMMARY
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: DB Session ID:  BRSQNPZ8VAPD8X1H06XS
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                     Options.env: 0x559005cefdc0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x559005f2b340
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x559004f05900
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kZSTD supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kXpressCompression supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kBZip2Compression supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kLZ4Compression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kZlibCompression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: 	kSnappyCompression supported: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559004ea78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea78d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea74b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea74b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559005f77800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559004ea74b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328202732, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:15:28 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec  4 05:15:28 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec  4 05:15:28 np0005545273 podman[87333]: 2025-12-04 10:15:28.224488898 +0000 UTC m=+0.031473789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:15:28 np0005545273 podman[87333]: 2025-12-04 10:15:28.382316091 +0000 UTC m=+0.189300962 container create 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328384347, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843328, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e", "db_session_id": "BRSQNPZ8VAPD8X1H06XS", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328391359, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843328, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e", "db_session_id": "BRSQNPZ8VAPD8X1H06XS", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328438476, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843328, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8ebe6ad-7e9e-4cac-a511-dfc0be6f711e", "db_session_id": "BRSQNPZ8VAPD8X1H06XS", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:28 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:28 np0005545273 systemd[1]: Started libpod-conmon-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope.
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:28 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843328640202, "job": 1, "event": "recovery_finished"}
Dec  4 05:15:28 np0005545273 ceph-osd[87071]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  4 05:15:28 np0005545273 podman[87333]: 2025-12-04 10:15:28.649569434 +0000 UTC m=+0.456554335 container init 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:28 np0005545273 podman[87333]: 2025-12-04 10:15:28.658449321 +0000 UTC m=+0.465434192 container start 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:28 np0005545273 podman[87333]: 2025-12-04 10:15:28.683199485 +0000 UTC m=+0.490184386 container attach 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:28 np0005545273 podman[87643]: 2025-12-04 10:15:28.879152068 +0000 UTC m=+0.022911400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:28 np0005545273 podman[87643]: 2025-12-04 10:15:28.981276081 +0000 UTC m=+0.125035393 container create 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 05:15:29 np0005545273 systemd[1]: Started libpod-conmon-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope.
Dec  4 05:15:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: Deploying daemon osd.2 on compute-0
Dec  4 05:15:29 np0005545273 podman[87643]: 2025-12-04 10:15:29.179617292 +0000 UTC m=+0.323376634 container init 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55900610fc00
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: rocksdb: DB pointer 0x5590060e4000
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.0 total, 1.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 460.80 MB usage: 0
Dec  4 05:15:29 np0005545273 podman[87643]: 2025-12-04 10:15:29.187814932 +0000 UTC m=+0.331574244 container start 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: _get_class not permitted to load lua
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: _get_class not permitted to load sdk
Dec  4 05:15:29 np0005545273 fervent_greider[87659]: 167 167
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  4 05:15:29 np0005545273 conmon[87659]: conmon 29217b6c5e0dee3a62de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope/container/memory.events
Dec  4 05:15:29 np0005545273 systemd[1]: libpod-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope: Deactivated successfully.
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: osd.1 0 load_pgs
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: osd.1 0 load_pgs opened 0 pgs
Dec  4 05:15:29 np0005545273 ceph-osd[87071]: osd.1 0 log_to_monitors true
Dec  4 05:15:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1[87067]: 2025-12-04T10:15:29.193+0000 7f1f9b3d38c0 -1 osd.1 0 log_to_monitors true
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec  4 05:15:29 np0005545273 podman[87643]: 2025-12-04 10:15:29.211619933 +0000 UTC m=+0.355379245 container attach 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:15:29 np0005545273 podman[87643]: 2025-12-04 10:15:29.212053414 +0000 UTC m=+0.355812726 container died 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245618866' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  4 05:15:29 np0005545273 nostalgic_jang[87581]: 
Dec  4 05:15:29 np0005545273 nostalgic_jang[87581]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":82,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-04T10:14:03:532003+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-04T10:14:03.534445+0000","services":{}},"progress_events":{}}
Dec  4 05:15:29 np0005545273 systemd[1]: libpod-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope: Deactivated successfully.
Dec  4 05:15:29 np0005545273 conmon[87581]: conmon 52c2f453236930da8f45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope/container/memory.events
Dec  4 05:15:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fc03c0988913cfb1643bbe2e15cfe1b6a71db12843130242cf8351aa4acb3f69-merged.mount: Deactivated successfully.
Dec  4 05:15:29 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:29 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:29 np0005545273 podman[87643]: 2025-12-04 10:15:29.632068396 +0000 UTC m=+0.775827708 container remove 29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:29 np0005545273 podman[87333]: 2025-12-04 10:15:29.66868619 +0000 UTC m=+1.475671071 container died 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:29 np0005545273 systemd[1]: libpod-conmon-29217b6c5e0dee3a62decebddd3c4c31f90cadb8b111e4bd36d3a78bfdf70932.scope: Deactivated successfully.
Dec  4 05:15:29 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ce60a5f7abd3f858575f11544dc36ebd2518d343c9a3513766be8049490a5ea1-merged.mount: Deactivated successfully.
Dec  4 05:15:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  4 05:15:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec  4 05:15:30 np0005545273 podman[87735]: 2025-12-04 10:15:30.232333848 +0000 UTC m=+0.142360886 container create 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 05:15:30 np0005545273 podman[87735]: 2025-12-04 10:15:30.166406439 +0000 UTC m=+0.076433487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:30 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:30 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Dec  4 05:15:30 np0005545273 systemd[1]: Started libpod-conmon-0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31.scope.
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  4 05:15:30 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:30 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:30 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:30 np0005545273 podman[87735]: 2025-12-04 10:15:30.983816011 +0000 UTC m=+0.893843129 container init 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:15:30 np0005545273 podman[87735]: 2025-12-04 10:15:30.992703818 +0000 UTC m=+0.902730836 container start 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:31 np0005545273 podman[87735]: 2025-12-04 10:15:31.044353469 +0000 UTC m=+0.954380517 container attach 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:15:31 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test[87751]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  4 05:15:31 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test[87751]:                            [--no-systemd] [--no-tmpfs]
Dec  4 05:15:31 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test[87751]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  4 05:15:31 np0005545273 systemd[1]: libpod-0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31.scope: Deactivated successfully.
Dec  4 05:15:31 np0005545273 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  4 05:15:31 np0005545273 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  4 05:15:31 np0005545273 podman[87333]: 2025-12-04 10:15:31.282599954 +0000 UTC m=+3.089584825 container remove 52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac (image=quay.io/ceph/ceph:v20, name=nostalgic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:31 np0005545273 podman[87735]: 2025-12-04 10:15:31.283823484 +0000 UTC m=+1.193850512 container died 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:15:31 np0005545273 systemd[1]: libpod-conmon-52c2f453236930da8f45ed995fc55321fe5c3882b721514452e4fc5a84b9abac.scope: Deactivated successfully.
Dec  4 05:15:31 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:31 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9c4ee30e26f794c5018c5998867738848d84811941df46a62c9c655409517446-merged.mount: Deactivated successfully.
Dec  4 05:15:31 np0005545273 podman[87735]: 2025-12-04 10:15:31.726779366 +0000 UTC m=+1.636806404 container remove 0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:31 np0005545273 systemd[1]: libpod-conmon-0f5af7172700cf5104555085da59e090f632bb1da69380c06ffe33ebf63ccd31.scope: Deactivated successfully.
Dec  4 05:15:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  4 05:15:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:31 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Dec  4 05:15:32 np0005545273 ceph-osd[87071]: osd.1 0 done with init, starting boot process
Dec  4 05:15:32 np0005545273 ceph-osd[87071]: osd.1 0 start_boot
Dec  4 05:15:32 np0005545273 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  4 05:15:32 np0005545273 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  4 05:15:32 np0005545273 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  4 05:15:32 np0005545273 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  4 05:15:32 np0005545273 ceph-osd[87071]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:32 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:32 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:32 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:32 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:32 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:32 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:33 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:33 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:33 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: from='osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  4 05:15:33 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:33 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:33 np0005545273 systemd[1]: Reloading.
Dec  4 05:15:33 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:15:33 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:15:33 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:34 np0005545273 systemd[1]: Reloading.
Dec  4 05:15:34 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:15:34 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:15:34 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:34 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:34 np0005545273 systemd[1]: Starting Ceph osd.2 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:15:34 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:34 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:34 np0005545273 podman[87915]: 2025-12-04 10:15:34.667972571 +0000 UTC m=+0.031542040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:34 np0005545273 podman[87915]: 2025-12-04 10:15:34.918507307 +0000 UTC m=+0.282076716 container create fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:15:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:35 np0005545273 podman[87915]: 2025-12-04 10:15:35.027531428 +0000 UTC m=+0.391100927 container init fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:15:35 np0005545273 podman[87915]: 2025-12-04 10:15:35.036311102 +0000 UTC m=+0.399880511 container start fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:15:35 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:35 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:35 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:35 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:35 np0005545273 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:35 np0005545273 podman[87915]: 2025-12-04 10:15:35.336918781 +0000 UTC m=+0.700488240 container attach fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:15:35 np0005545273 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:35 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:35 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:35 np0005545273 lvm[88017]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:35 np0005545273 lvm[88016]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:15:35 np0005545273 lvm[88017]: VG ceph_vg1 finished
Dec  4 05:15:35 np0005545273 lvm[88016]: VG ceph_vg0 finished
Dec  4 05:15:35 np0005545273 ceph-mgr[75651]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  4 05:15:35 np0005545273 lvm[88019]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:15:35 np0005545273 lvm[88019]: VG ceph_vg2 finished
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:36 np0005545273 bash[87915]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  4 05:15:36 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:36 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  4 05:15:36 np0005545273 bash[87915]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate[87931]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  4 05:15:36 np0005545273 bash[87915]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  4 05:15:36 np0005545273 systemd[1]: libpod-fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf.scope: Deactivated successfully.
Dec  4 05:15:36 np0005545273 podman[87915]: 2025-12-04 10:15:36.274757963 +0000 UTC m=+1.638327372 container died fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:15:36 np0005545273 systemd[1]: libpod-fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf.scope: Consumed 1.772s CPU time.
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 4.092 iops: 1047.652 elapsed_sec: 2.864
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: log_channel(cluster) log [WRN] : OSD bench result of 1047.651829 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 0 waiting for initial osdmap
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0[86017]: 2025-12-04T10:15:36.312+0000 7f8fb551c640 -1 osd.0 0 waiting for initial osdmap
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Dec  4 05:15:36 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/371201250; not ready for session (expect reconnect)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:36 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  4 05:15:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-0[86017]: 2025-12-04T10:15:36.492+0000 7f8fb0321640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 10 set_numa_affinity not setting numa affinity
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec  4 05:15:36 np0005545273 systemd[1]: var-lib-containers-storage-overlay-59182602b06dc375eb3ad82f6200470d0839dd044ddedc91ba7a76c472c047ab-merged.mount: Deactivated successfully.
Dec  4 05:15:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  4 05:15:36 np0005545273 podman[87915]: 2025-12-04 10:15:36.731195825 +0000 UTC m=+2.094765234 container remove fd17ca82352d70212aba5f65930f8f4fefda2ccf705ca257170db0e9d3ddacbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2-activate, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250] boot
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:36 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:36 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:36 np0005545273 ceph-osd[86021]: osd.0 11 state: booting -> active
Dec  4 05:15:37 np0005545273 podman[88185]: 2025-12-04 10:15:37.002929178 +0000 UTC m=+0.059011821 container create 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:37 np0005545273 podman[88185]: 2025-12-04 10:15:36.972606357 +0000 UTC m=+0.028689030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fb9ee458c50c034bdc13802365db35a6d061f56287a88712d929a2b741cba/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:37 np0005545273 podman[88185]: 2025-12-04 10:15:37.15339742 +0000 UTC m=+0.209480073 container init 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:15:37 np0005545273 podman[88185]: 2025-12-04 10:15:37.164482771 +0000 UTC m=+0.220565404 container start 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:15:37 np0005545273 bash[88185]: 743bc5e794db2e1212d983a5a84a30b8ad953b57b314c50b155b01df81070c42
Dec  4 05:15:37 np0005545273 systemd[1]: Started Ceph osd.2 for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: pidfile_write: ignore empty --pid-file
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:37 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a400 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4a000 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: load: jerasure load: lrc 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a1d4bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount shared_bdev_used = 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Git sha 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: DB SUMMARY
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: DB Session ID:  7MY0ZPEWWGRZELY8V8L4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                     Options.env: 0x55c0a1bdbea0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                Options.info_log: 0x55c0a2c488a0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                 Options.wal_dir: db.wal
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.write_buffer_manager: 0x55c0a1c40b40
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.row_cache: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                              Options.wal_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.wal_compression: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_background_jobs: 4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Compression algorithms supported:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kZSTD supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kXpressCompression supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kBZip2Compression supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kLZ4Compression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kZlibCompression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: 	kSnappyCompression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c0a1bdf8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdf8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdf8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdf8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c0a1bdf8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c0a1bdf8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdf8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c48c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7bb5ab31-9ba6-46d6-87fa-5957b282c9d1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337609414, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337611523, "job": 1, "event": "recovery_finished"}
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: freelist init
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: freelist _read_cfg
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs umount
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) close
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bdev(0x55c0a29eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluefs mount shared_bdev_used = 27262976
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: RocksDB version: 7.9.2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Git sha 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: DB SUMMARY
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: DB Session ID:  7MY0ZPEWWGRZELY8V8L5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: CURRENT file:  CURRENT
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: IDENTITY file:  IDENTITY
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.error_if_exists: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.create_if_missing: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.paranoid_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                     Options.env: 0x55c0a1bdbd50
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                Options.info_log: 0x55c0a2c49b00
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_file_opening_threads: 16
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                              Options.statistics: (nil)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.use_fsync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.max_log_file_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.allow_fallocate: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.use_direct_reads: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.create_missing_column_families: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                              Options.db_log_dir: 
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                                 Options.wal_dir: db.wal
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.advise_random_on_open: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.write_buffer_manager: 0x55c0a1c41900
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                            Options.rate_limiter: (nil)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.unordered_write: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.row_cache: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                              Options.wal_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.allow_ingest_behind: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.two_write_queues: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.manual_wal_flush: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.wal_compression: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.atomic_flush: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.log_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.allow_data_in_errors: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.db_host_id: __hostname__
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_background_jobs: 4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_background_compactions: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_subcompactions: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.max_open_files: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.bytes_per_sync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.max_background_flushes: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Compression algorithms supported:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kZSTD supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kXpressCompression supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kBZip2Compression supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kLZ4Compression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kZlibCompression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: #011kSnappyCompression supported: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c0a1bdfa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c0a1bdfa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c0a1bdfa30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdfa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdf4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdf4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:           Options.merge_operator: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.compaction_filter_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.sst_partitioner_factory: None
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c0a2c82300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c0a1bdf4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.write_buffer_size: 16777216
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.max_write_buffer_number: 64
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.compression: LZ4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.num_levels: 7
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.level: 32767
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.compression_opts.strategy: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                  Options.compression_opts.enabled: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.arena_block_size: 1048576
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.disable_auto_compactions: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.inplace_update_support: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.bloom_locality: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                    Options.max_successive_merges: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.paranoid_file_checks: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.force_consistency_checks: 1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.report_bg_io_stats: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                               Options.ttl: 2592000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                       Options.enable_blob_files: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                           Options.min_blob_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                          Options.blob_file_size: 268435456
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb:                Options.blob_file_starting_level: 0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7bb5ab31-9ba6-46d6-87fa-5957b282c9d1
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337657731, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337662187, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843337, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7bb5ab31-9ba6-46d6-87fa-5957b282c9d1", "db_session_id": "7MY0ZPEWWGRZELY8V8L5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337665482, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843337, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7bb5ab31-9ba6-46d6-87fa-5957b282c9d1", "db_session_id": "7MY0ZPEWWGRZELY8V8L5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337694386, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843337, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7bb5ab31-9ba6-46d6-87fa-5957b282c9d1", "db_session_id": "7MY0ZPEWWGRZELY8V8L5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843337696754, "job": 1, "event": "recovery_finished"}
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c0a2c4bc00
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: DB pointer 0x55c0a2e02000
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: _get_class not permitted to load lua
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: _get_class not permitted to load sdk
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2 0 load_pgs
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2 0 load_pgs opened 0 pgs
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.001865 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Dec  4 05:15:37 np0005545273 ceph-osd[88205]: osd.2 0 log_to_monitors true
Dec  4 05:15:37 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2[88201]: 2025-12-04T10:15:37.785+0000 7fd8a9ade8c0 -1 osd.2 0 log_to_monitors true
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: OSD bench result of 1047.651829 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
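The mon rejects the 1047.65 IOPS bench result for osd.0 because it falls outside the 50–500 IOPS acceptance window, so the previously stored 315 IOPS capacity is kept. The decision logic can be sketched as follows (a simplified re-implementation for illustration, not Ceph's actual code):

```python
def accept_bench_iops(measured: float, current: float,
                      low: float = 50.0, high: float = 500.0) -> float:
    """Adopt the measured IOPS only if it lies within [low, high];
    otherwise keep the current osd_mclock_max_capacity_iops value."""
    return measured if low <= measured <= high else current

print(accept_bench_iops(1047.651829, 315.0))  # 315.0 (out of range, unchanged)
print(accept_bench_iops(240.0, 315.0))        # 240.0 (in range, adopted)
```

Following the log's own recommendation, after measuring with an external tool such as fio the value could be pinned with something like `ceph config set osd.0 osd_mclock_max_capacity_iops_hdd <iops>` (command shape inferred from the option name in the message).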
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: osd.0 [v2:192.168.122.100:6802/371201250,v1:192.168.122.100:6803/371201250] boot
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec  4 05:15:37 np0005545273 podman[88721]: 2025-12-04 10:15:37.878323476 +0000 UTC m=+0.061388130 container create a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
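The `initial_weight 0.02` (0.0195 on the command line) follows the usual CRUSH convention: device capacity expressed in TiB. Assuming a ~20 GiB backing device, which matches both the 0.0195 weight and the "20 GiB / 20 GiB avail" pgmap line later in this log, the arithmetic is:

```python
def crush_weight(size_bytes: int) -> float:
    """CRUSH weight convention: device capacity in TiB, rounded to 4 decimals."""
    return round(size_bytes / 2**40, 4)

print(crush_weight(20 * 2**30))  # 0.0195  (20 GiB = 20/1024 TiB)
```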
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:37 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:37 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:37 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] creating mgr pool
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec  4 05:15:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec  4 05:15:37 np0005545273 podman[88721]: 2025-12-04 10:15:37.845822763 +0000 UTC m=+0.028887407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:37 np0005545273 systemd[1]: Started libpod-conmon-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope.
Dec  4 05:15:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:38 np0005545273 podman[88721]: 2025-12-04 10:15:38.027957389 +0000 UTC m=+0.211022023 container init a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:38 np0005545273 podman[88721]: 2025-12-04 10:15:38.038962458 +0000 UTC m=+0.222027072 container start a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:15:38 np0005545273 eager_galois[88738]: 167 167
Dec  4 05:15:38 np0005545273 systemd[1]: libpod-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope: Deactivated successfully.
Dec  4 05:15:38 np0005545273 conmon[88738]: conmon a058100114bf0df45397 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope/container/memory.events
Dec  4 05:15:38 np0005545273 podman[88721]: 2025-12-04 10:15:38.059902609 +0000 UTC m=+0.242967243 container attach a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 05:15:38 np0005545273 podman[88721]: 2025-12-04 10:15:38.060574645 +0000 UTC m=+0.243639259 container died a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:15:38 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5416eeab3661cc2661757b52d28c742df388d9d567265da76da7f206f61704f5-merged.mount: Deactivated successfully.
Dec  4 05:15:38 np0005545273 podman[88721]: 2025-12-04 10:15:38.188503818 +0000 UTC m=+0.371568452 container remove a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:15:38 np0005545273 systemd[1]: libpod-conmon-a058100114bf0df45397391b5ce86a422d4d14969389eeb0597fd53078bfc69a.scope: Deactivated successfully.
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:38 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:38 np0005545273 podman[88764]: 2025-12-04 10:15:38.407711848 +0000 UTC m=+0.073263489 container create 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:38 np0005545273 podman[88764]: 2025-12-04 10:15:38.361880349 +0000 UTC m=+0.027432080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:38 np0005545273 systemd[1]: Started libpod-conmon-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope.
Dec  4 05:15:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:38 np0005545273 podman[88764]: 2025-12-04 10:15:38.549684704 +0000 UTC m=+0.215236365 container init 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:15:38 np0005545273 podman[88764]: 2025-12-04 10:15:38.556723516 +0000 UTC m=+0.222275147 container start 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:15:38 np0005545273 podman[88764]: 2025-12-04 10:15:38.575664318 +0000 UTC m=+0.241215939 container attach 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:15:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e12 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: osd.2 0 done with init, starting boot process
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: osd.2 0 start_boot
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  4 05:15:38 np0005545273 ceph-osd[88205]: osd.2 0  bench count 12288000 bsize 4 KiB
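`bench count 12288000 bsize 4 KiB` means osd.2 will write 12288000 bytes in 4 KiB blocks; the IOPS figure later reported to the mon is just writes divided by elapsed time. A back-of-envelope sketch (the real accounting lives inside the OSD bench code):

```python
count_bytes = 12288000   # total bytes to write, from the log line
bsize = 4096             # 4 KiB block size
ios = count_bytes // bsize
print(ios)  # 3000 writes

# A result like the ~1047 IOPS seen for osd.0 above would correspond to
# roughly ios / 1047.65 ~= 2.86 seconds of wall time for the bench.
elapsed = ios / 1047.651829
print(f"{elapsed:.2f} s")  # 2.86 s
```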
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 3314933000852226048, adjusting msgr requires
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Dec  4 05:15:38 np0005545273 ceph-osd[86021]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  4 05:15:38 np0005545273 ceph-osd[86021]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  4 05:15:38 np0005545273 ceph-osd[86021]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:38 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:38 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:39 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:39 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:39 np0005545273 lvm[88858]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:15:39 np0005545273 lvm[88858]: VG ceph_vg0 finished
Dec  4 05:15:39 np0005545273 lvm[88859]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:39 np0005545273 lvm[88859]: VG ceph_vg1 finished
Dec  4 05:15:39 np0005545273 lvm[88861]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:15:39 np0005545273 lvm[88861]: VG ceph_vg2 finished
Dec  4 05:15:39 np0005545273 competent_swartz[88780]: {}
Dec  4 05:15:39 np0005545273 systemd[1]: libpod-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope: Deactivated successfully.
Dec  4 05:15:39 np0005545273 podman[88764]: 2025-12-04 10:15:39.501751073 +0000 UTC m=+1.167302724 container died 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:15:39 np0005545273 systemd[1]: libpod-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope: Consumed 1.483s CPU time.
Dec  4 05:15:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9f9e44c5fc4def8409b64cc805bea9f6f2f857e5a81de0325c64c8b181b11674-merged.mount: Deactivated successfully.
Dec  4 05:15:39 np0005545273 podman[88764]: 2025-12-04 10:15:39.742577382 +0000 UTC m=+1.408129013 container remove 860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:39 np0005545273 systemd[1]: libpod-conmon-860787ea375524181b97ef13697565c4d6400e03bc37cb14ef5c6855cd9cc7db.scope: Deactivated successfully.
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Dec  4 05:15:39 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: from='osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:39 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:39 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:40 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:40 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  4 05:15:40 np0005545273 podman[88995]: 2025-12-04 10:15:40.7215609 +0000 UTC m=+0.215713387 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:15:40 np0005545273 podman[88995]: 2025-12-04 10:15:40.86168297 +0000 UTC m=+0.355835437 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.881 iops: 2017.502 elapsed_sec: 1.487
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: log_channel(cluster) log [WRN] : OSD bench result of 2017.501860 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 0 waiting for initial osdmap
Dec  4 05:15:40 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1[87067]: 2025-12-04T10:15:40.913+0000 7f1f97355640 -1 osd.1 0 waiting for initial osdmap
Dec  4 05:15:40 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 14 check_osdmap_features require_osd_release unknown -> tentacle
Dec  4 05:15:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:40 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  4 05:15:40 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-1[87067]: 2025-12-04T10:15:40.972+0000 7f1f9215a640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 14 set_numa_affinity not setting numa affinity
Dec  4 05:15:40 np0005545273 ceph-osd[87071]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec  4 05:15:41 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1594570567; not ready for session (expect reconnect)
Dec  4 05:15:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:41 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  4 05:15:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  4 05:15:42 np0005545273 ceph-osd[87071]: osd.1 14 tick checking mon for new map
Dec  4 05:15:42 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:42 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567] boot
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:42 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:42 np0005545273 ceph-osd[87071]: osd.1 15 state: booting -> active
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: OSD bench result of 2017.501860 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  4 05:15:42 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:42 np0005545273 podman[89202]: 2025-12-04 10:15:42.645072902 +0000 UTC m=+0.073241979 container create e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  4 05:15:42 np0005545273 podman[89202]: 2025-12-04 10:15:42.59991738 +0000 UTC m=+0.028086507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:42 np0005545273 systemd[1]: Started libpod-conmon-e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353.scope.
Dec  4 05:15:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:42 np0005545273 podman[89202]: 2025-12-04 10:15:42.78547312 +0000 UTC m=+0.213642247 container init e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:42 np0005545273 podman[89202]: 2025-12-04 10:15:42.793617219 +0000 UTC m=+0.221786296 container start e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:15:42 np0005545273 compassionate_bose[89218]: 167 167
Dec  4 05:15:42 np0005545273 systemd[1]: libpod-e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353.scope: Deactivated successfully.
Dec  4 05:15:42 np0005545273 podman[89202]: 2025-12-04 10:15:42.819183412 +0000 UTC m=+0.247352489 container attach e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:15:42 np0005545273 podman[89202]: 2025-12-04 10:15:42.819861019 +0000 UTC m=+0.248030096 container died e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:15:42 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9c6e877dded2435a8c33627001d01baed5facb4b72b0d88717d4fcbb32c88b86-merged.mount: Deactivated successfully.
Dec  4 05:15:42 np0005545273 podman[89202]: 2025-12-04 10:15:42.93255387 +0000 UTC m=+0.360722937 container remove e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:42 np0005545273 systemd[1]: libpod-conmon-e0f7b31a26770fd361a8d4eef8b6873e88af2cfc15950b930c32d1f3d5300353.scope: Deactivated successfully.
Dec  4 05:15:42 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:42 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: osd.1 [v2:192.168.122.100:6806/1594570567,v1:192.168.122.100:6807/1594570567] boot
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:43 np0005545273 podman[89242]: 2025-12-04 10:15:43.118419667 +0000 UTC m=+0.067233342 container create e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Dec  4 05:15:43 np0005545273 podman[89242]: 2025-12-04 10:15:43.080592114 +0000 UTC m=+0.029405819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Dec  4 05:15:43 np0005545273 systemd[1]: Started libpod-conmon-e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de.scope.
Dec  4 05:15:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:43 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:15:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:43 np0005545273 podman[89242]: 2025-12-04 10:15:43.256046636 +0000 UTC m=+0.204860331 container init e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:43 np0005545273 podman[89242]: 2025-12-04 10:15:43.264777229 +0000 UTC m=+0.213590894 container start e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:43 np0005545273 podman[89242]: 2025-12-04 10:15:43.309856679 +0000 UTC m=+0.258670324 container attach e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:15:43 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] creating main.db for devicehealth
Dec  4 05:15:43 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec  4 05:15:43 np0005545273 ceph-mgr[75651]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]: [
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:    {
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "available": false,
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "being_replaced": false,
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "ceph_device_lvm": false,
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "lsm_data": {},
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "lvs": [],
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "path": "/dev/sr0",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "rejected_reasons": [
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "Insufficient space (<5GB)",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "Has a FileSystem"
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        ],
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        "sys_api": {
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "actuators": null,
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "device_nodes": [
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:                "sr0"
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            ],
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "devname": "sr0",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "human_readable_size": "482.00 KB",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "id_bus": "ata",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "model": "QEMU DVD-ROM",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "nr_requests": "2",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "parent": "/dev/sr0",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "partitions": {},
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "path": "/dev/sr0",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "removable": "1",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "rev": "2.5+",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "ro": "0",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "rotational": "1",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "sas_address": "",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "sas_device_handle": "",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "scheduler_mode": "mq-deadline",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "sectors": 0,
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "sectorsize": "2048",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "size": 493568.0,
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "support_discard": "2048",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "type": "disk",
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:            "vendor": "QEMU"
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:        }
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]:    }
Dec  4 05:15:43 np0005545273 recursing_roentgen[89258]: ]
Dec  4 05:15:43 np0005545273 systemd[1]: libpod-e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de.scope: Deactivated successfully.
Dec  4 05:15:43 np0005545273 podman[89242]: 2025-12-04 10:15:43.835323317 +0000 UTC m=+0.784136992 container died e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:15:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ccfb097a4a9cd8f44b712137b4b1a86a995ec84159c4a68a7d00e7a0ab67a4f6-merged.mount: Deactivated successfully.
Dec  4 05:15:43 np0005545273 podman[89242]: 2025-12-04 10:15:43.931396661 +0000 UTC m=+0.880210346 container remove e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:43 np0005545273 systemd[1]: libpod-conmon-e2c69191fd83469d0c6063514b0b64f7636528ab94369275ec124f4eaf4354de.scope: Deactivated successfully.
Dec  4 05:15:43 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:43 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.609 iops: 6555.894 elapsed_sec: 0.458
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: log_channel(cluster) log [WRN] : OSD bench result of 6555.894056 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  4 05:15:44 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2[88201]: 2025-12-04T10:15:44.249+0000 7fd8a5a60640 -1 osd.2 0 waiting for initial osdmap
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 0 waiting for initial osdmap
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 17 check_osdmap_features require_osd_release unknown -> tentacle
Dec  4 05:15:44 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-osd-2[88201]: 2025-12-04T10:15:44.277+0000 7fd8a0865640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 17 set_numa_affinity not setting numa affinity
Dec  4 05:15:44 np0005545273 ceph-osd[88205]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec  4 05:15:44 np0005545273 podman[90136]: 2025-12-04 10:15:44.509945353 +0000 UTC m=+0.043120222 container create b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:15:44 np0005545273 systemd[1]: Started libpod-conmon-b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773.scope.
Dec  4 05:15:44 np0005545273 podman[90136]: 2025-12-04 10:15:44.491247397 +0000 UTC m=+0.024422296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:44 np0005545273 podman[90136]: 2025-12-04 10:15:44.607971676 +0000 UTC m=+0.141146545 container init b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:44 np0005545273 podman[90136]: 2025-12-04 10:15:44.614995748 +0000 UTC m=+0.148170617 container start b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:15:44 np0005545273 podman[90136]: 2025-12-04 10:15:44.619376075 +0000 UTC m=+0.152550944 container attach b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:44 np0005545273 kind_booth[90153]: 167 167
Dec  4 05:15:44 np0005545273 systemd[1]: libpod-b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773.scope: Deactivated successfully.
Dec  4 05:15:44 np0005545273 podman[90136]: 2025-12-04 10:15:44.62163692 +0000 UTC m=+0.154811789 container died b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:15:44 np0005545273 systemd[1]: var-lib-containers-storage-overlay-840a37dc867c9f8a89c31042f5f392fe92f928aeeb6dfd05cdffc431f56c12f1-merged.mount: Deactivated successfully.
Dec  4 05:15:44 np0005545273 podman[90136]: 2025-12-04 10:15:44.663223095 +0000 UTC m=+0.196397964 container remove b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:15:44 np0005545273 systemd[1]: libpod-conmon-b36a27ba5c34af27c5e6049d3fbd617ea2903a13a0d7f1ebe048054150d21773.scope: Deactivated successfully.
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  4 05:15:44 np0005545273 podman[90178]: 2025-12-04 10:15:44.862380867 +0000 UTC m=+0.043697798 container create f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  4 05:15:44 np0005545273 systemd[1]: Started libpod-conmon-f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84.scope.
Dec  4 05:15:44 np0005545273 podman[90178]: 2025-12-04 10:15:44.840869012 +0000 UTC m=+0.022185943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1490083487; not ready for session (expect reconnect)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:44 np0005545273 ceph-mgr[75651]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  4 05:15:44 np0005545273 podman[90178]: 2025-12-04 10:15:44.964486879 +0000 UTC m=+0.145803810 container init f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:15:44 np0005545273 podman[90178]: 2025-12-04 10:15:44.97271618 +0000 UTC m=+0.154033081 container start f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:15:44 np0005545273 podman[90178]: 2025-12-04 10:15:44.976695287 +0000 UTC m=+0.158012198 container attach f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.iwufnj(active, since 79s)
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487] boot
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  4 05:15:45 np0005545273 ceph-osd[88205]: osd.2 18 state: booting -> active
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: Adjusting osd_memory_target on compute-0 to 43690k
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  4 05:15:45 np0005545273 ceph-mon[75358]: OSD bench result of 6555.894056 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  4 05:15:45 np0005545273 recursing_jackson[90194]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:15:45 np0005545273 recursing_jackson[90194]: --> All data devices are unavailable
Dec  4 05:15:45 np0005545273 systemd[1]: libpod-f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84.scope: Deactivated successfully.
Dec  4 05:15:45 np0005545273 podman[90178]: 2025-12-04 10:15:45.539448634 +0000 UTC m=+0.720765575 container died f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:15:45 np0005545273 systemd[1]: var-lib-containers-storage-overlay-44a76e9a0c672b4a9beb7e049f2c27f07f87cbbc81d33ccb588220d9d6eafa0b-merged.mount: Deactivated successfully.
Dec  4 05:15:45 np0005545273 podman[90178]: 2025-12-04 10:15:45.592481669 +0000 UTC m=+0.773798570 container remove f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:45 np0005545273 systemd[1]: libpod-conmon-f56ff6f449d4d887b4f3ed2e3e3bd4887c75e5cf3065a11a0cdc830ba4147b84.scope: Deactivated successfully.
Dec  4 05:15:46 np0005545273 podman[90287]: 2025-12-04 10:15:46.109400107 +0000 UTC m=+0.065617213 container create 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:46 np0005545273 systemd[1]: Started libpod-conmon-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope.
Dec  4 05:15:46 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:46 np0005545273 podman[90287]: 2025-12-04 10:15:46.083268288 +0000 UTC m=+0.039485474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:46 np0005545273 podman[90287]: 2025-12-04 10:15:46.184718705 +0000 UTC m=+0.140935811 container init 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:46 np0005545273 podman[90287]: 2025-12-04 10:15:46.194441242 +0000 UTC m=+0.150658338 container start 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:46 np0005545273 podman[90287]: 2025-12-04 10:15:46.198716767 +0000 UTC m=+0.154933913 container attach 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:46 np0005545273 reverent_hopper[90303]: 167 167
Dec  4 05:15:46 np0005545273 systemd[1]: libpod-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope: Deactivated successfully.
Dec  4 05:15:46 np0005545273 conmon[90303]: conmon 69e5b79e26e02874a9e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope/container/memory.events
Dec  4 05:15:46 np0005545273 podman[90287]: 2025-12-04 10:15:46.20214108 +0000 UTC m=+0.158358176 container died 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:15:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  4 05:15:46 np0005545273 ceph-mon[75358]: osd.2 [v2:192.168.122.100:6810/1490083487,v1:192.168.122.100:6811/1490083487] boot
Dec  4 05:15:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec  4 05:15:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec  4 05:15:46 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3a15d6ad0bfe531237d0f779665933e4439e0663ce48915e67d067665196afd3-merged.mount: Deactivated successfully.
Dec  4 05:15:46 np0005545273 podman[90287]: 2025-12-04 10:15:46.249501616 +0000 UTC m=+0.205718712 container remove 69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:15:46 np0005545273 systemd[1]: libpod-conmon-69e5b79e26e02874a9e11350ea9e00d754f70f41a6e075b8061390f873bef015.scope: Deactivated successfully.
Dec  4 05:15:46 np0005545273 podman[90326]: 2025-12-04 10:15:46.413738605 +0000 UTC m=+0.048683709 container create 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:46 np0005545273 systemd[1]: Started libpod-conmon-12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a.scope.
Dec  4 05:15:46 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:46 np0005545273 podman[90326]: 2025-12-04 10:15:46.39265218 +0000 UTC m=+0.027597304 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:46 np0005545273 podman[90326]: 2025-12-04 10:15:46.49339596 +0000 UTC m=+0.128341074 container init 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:15:46 np0005545273 podman[90326]: 2025-12-04 10:15:46.500663317 +0000 UTC m=+0.135608421 container start 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:46 np0005545273 podman[90326]: 2025-12-04 10:15:46.523642898 +0000 UTC m=+0.158588002 container attach 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:15:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]: {
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:    "0": [
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:        {
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "devices": [
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "/dev/loop3"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            ],
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_name": "ceph_lv0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_size": "21470642176",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "name": "ceph_lv0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "tags": {
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.crush_device_class": "",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.encrypted": "0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osd_id": "0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.type": "block",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.vdo": "0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.with_tpm": "0"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            },
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "type": "block",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "vg_name": "ceph_vg0"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:        }
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:    ],
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:    "1": [
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:        {
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "devices": [
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "/dev/loop4"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            ],
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_name": "ceph_lv1",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_size": "21470642176",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "name": "ceph_lv1",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "tags": {
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.crush_device_class": "",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.encrypted": "0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osd_id": "1",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.type": "block",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.vdo": "0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.with_tpm": "0"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            },
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "type": "block",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "vg_name": "ceph_vg1"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:        }
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:    ],
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:    "2": [
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:        {
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "devices": [
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "/dev/loop5"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            ],
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_name": "ceph_lv2",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_size": "21470642176",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "name": "ceph_lv2",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "tags": {
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.crush_device_class": "",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.encrypted": "0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osd_id": "2",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.type": "block",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.vdo": "0",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:                "ceph.with_tpm": "0"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            },
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "type": "block",
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:            "vg_name": "ceph_vg2"
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:        }
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]:    ]
Dec  4 05:15:46 np0005545273 vibrant_curie[90342]: }
Dec  4 05:15:46 np0005545273 systemd[1]: libpod-12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a.scope: Deactivated successfully.
Dec  4 05:15:46 np0005545273 podman[90326]: 2025-12-04 10:15:46.859068116 +0000 UTC m=+0.494013220 container died 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:15:46 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6894112674c22042c2bb40464db234bd6a74bf95ed1021bee5954f6ffa517e32-merged.mount: Deactivated successfully.
Dec  4 05:15:46 np0005545273 podman[90326]: 2025-12-04 10:15:46.906647907 +0000 UTC m=+0.541593031 container remove 12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  4 05:15:46 np0005545273 systemd[1]: libpod-conmon-12bf3513e5940e564bee32f5b86a0740872e0f1d5a5bd2b14369a13155baaf8a.scope: Deactivated successfully.
Dec  4 05:15:47 np0005545273 podman[90425]: 2025-12-04 10:15:47.427066891 +0000 UTC m=+0.048484455 container create 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:15:47 np0005545273 systemd[1]: Started libpod-conmon-282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1.scope.
Dec  4 05:15:47 np0005545273 podman[90425]: 2025-12-04 10:15:47.405584097 +0000 UTC m=+0.027001681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:47 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:47 np0005545273 podman[90425]: 2025-12-04 10:15:47.519586069 +0000 UTC m=+0.141003683 container init 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:15:47 np0005545273 podman[90425]: 2025-12-04 10:15:47.528950998 +0000 UTC m=+0.150368572 container start 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:15:47 np0005545273 podman[90425]: 2025-12-04 10:15:47.53353163 +0000 UTC m=+0.154949244 container attach 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:15:47 np0005545273 trusting_stonebraker[90441]: 167 167
Dec  4 05:15:47 np0005545273 systemd[1]: libpod-282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1.scope: Deactivated successfully.
Dec  4 05:15:47 np0005545273 podman[90425]: 2025-12-04 10:15:47.537868775 +0000 UTC m=+0.159286339 container died 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:47 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c0fd7d232b1ebfaf3d2ddfa5d7ebb9ceeadb5c7856f9876fd40902c4d654ff4c-merged.mount: Deactivated successfully.
Dec  4 05:15:47 np0005545273 podman[90425]: 2025-12-04 10:15:47.578709592 +0000 UTC m=+0.200127156 container remove 282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_stonebraker, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:47 np0005545273 systemd[1]: libpod-conmon-282463870eec6c242e8a7fb00d82f65b84bbfd80d981525d85e8991d1fc099a1.scope: Deactivated successfully.
Dec  4 05:15:47 np0005545273 podman[90464]: 2025-12-04 10:15:47.738552094 +0000 UTC m=+0.044143748 container create 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:15:47 np0005545273 systemd[1]: Started libpod-conmon-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope.
Dec  4 05:15:47 np0005545273 podman[90464]: 2025-12-04 10:15:47.716913926 +0000 UTC m=+0.022505600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:47 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:47 np0005545273 podman[90464]: 2025-12-04 10:15:47.846421367 +0000 UTC m=+0.152013041 container init 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:47 np0005545273 podman[90464]: 2025-12-04 10:15:47.853067889 +0000 UTC m=+0.158659543 container start 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:47 np0005545273 podman[90464]: 2025-12-04 10:15:47.857462546 +0000 UTC m=+0.163054230 container attach 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:48 np0005545273 lvm[90559]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:48 np0005545273 lvm[90558]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:15:48 np0005545273 lvm[90558]: VG ceph_vg0 finished
Dec  4 05:15:48 np0005545273 lvm[90559]: VG ceph_vg1 finished
Dec  4 05:15:48 np0005545273 lvm[90561]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:15:48 np0005545273 lvm[90561]: VG ceph_vg2 finished
Dec  4 05:15:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec  4 05:15:48 np0005545273 gifted_napier[90480]: {}
Dec  4 05:15:48 np0005545273 systemd[1]: libpod-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope: Deactivated successfully.
Dec  4 05:15:48 np0005545273 podman[90464]: 2025-12-04 10:15:48.749587613 +0000 UTC m=+1.055179267 container died 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:48 np0005545273 systemd[1]: libpod-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope: Consumed 1.448s CPU time.
Dec  4 05:15:48 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f2569d29a287f09c305045c855d6446e8349940e67fe33857560f50c765409f0-merged.mount: Deactivated successfully.
Dec  4 05:15:48 np0005545273 podman[90464]: 2025-12-04 10:15:48.80149521 +0000 UTC m=+1.107086864 container remove 0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_napier, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:48 np0005545273 systemd[1]: libpod-conmon-0b9e51df596c5c680851fcaca70c5818fb1a67bacda26f662d2e8ea6ab866136.scope: Deactivated successfully.
Dec  4 05:15:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:49 np0005545273 podman[90694]: 2025-12-04 10:15:49.564147456 +0000 UTC m=+0.074517550 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:49 np0005545273 podman[90694]: 2025-12-04 10:15:49.676725815 +0000 UTC m=+0.187095848 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec  4 05:15:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:15:52 np0005545273 podman[90989]: 2025-12-04 10:15:51.95755503 +0000 UTC m=+0.025239317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:52 np0005545273 podman[90989]: 2025-12-04 10:15:52.080267354 +0000 UTC m=+0.147951641 container create d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 05:15:52 np0005545273 systemd[1]: Started libpod-conmon-d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2.scope.
Dec  4 05:15:52 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:52 np0005545273 podman[90989]: 2025-12-04 10:15:52.268000107 +0000 UTC m=+0.335684414 container init d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:15:52 np0005545273 podman[90989]: 2025-12-04 10:15:52.276109325 +0000 UTC m=+0.343793612 container start d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:52 np0005545273 vigilant_knuth[91005]: 167 167
Dec  4 05:15:52 np0005545273 systemd[1]: libpod-d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2.scope: Deactivated successfully.
Dec  4 05:15:52 np0005545273 podman[90989]: 2025-12-04 10:15:52.292323571 +0000 UTC m=+0.360007948 container attach d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:15:52 np0005545273 podman[90989]: 2025-12-04 10:15:52.292901805 +0000 UTC m=+0.360586092 container died d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:52 np0005545273 systemd[1]: var-lib-containers-storage-overlay-668011341d4c2a44eae23abf5ae6eeeb8f8f56f4752f56304a2c89a0e992ee75-merged.mount: Deactivated successfully.
Dec  4 05:15:52 np0005545273 podman[90989]: 2025-12-04 10:15:52.332687917 +0000 UTC m=+0.400372204 container remove d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_knuth, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:52 np0005545273 systemd[1]: libpod-conmon-d36d8c6b7277e99b3f9c5d308a1018262a011e96b4212960613f2694742245a2.scope: Deactivated successfully.
Dec  4 05:15:52 np0005545273 podman[91030]: 2025-12-04 10:15:52.496184977 +0000 UTC m=+0.051764445 container create 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:15:52 np0005545273 systemd[1]: Started libpod-conmon-8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca.scope.
Dec  4 05:15:52 np0005545273 podman[91030]: 2025-12-04 10:15:52.471909465 +0000 UTC m=+0.027488973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:52 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:52 np0005545273 podman[91030]: 2025-12-04 10:15:52.610571269 +0000 UTC m=+0.166150787 container init 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:52 np0005545273 podman[91030]: 2025-12-04 10:15:52.618977244 +0000 UTC m=+0.174556732 container start 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:15:52 np0005545273 podman[91030]: 2025-12-04 10:15:52.62250443 +0000 UTC m=+0.178083898 container attach 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:15:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:15:53 np0005545273 youthful_heisenberg[91046]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:15:53 np0005545273 youthful_heisenberg[91046]: --> All data devices are unavailable
Dec  4 05:15:53 np0005545273 systemd[1]: libpod-8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca.scope: Deactivated successfully.
Dec  4 05:15:53 np0005545273 podman[91030]: 2025-12-04 10:15:53.184078449 +0000 UTC m=+0.739657937 container died 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:15:53 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5af5325964de0c0a4a4962ad95228dc12ba47c30eeba71419ae7418ef4bad0cc-merged.mount: Deactivated successfully.
Dec  4 05:15:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:53 np0005545273 podman[91030]: 2025-12-04 10:15:53.694684772 +0000 UTC m=+1.250264280 container remove 8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:15:53 np0005545273 systemd[1]: libpod-conmon-8fe362b98fad0f95ddf746a8dc81feac2eaee2e0a22bbbe3b05e4ea16905caca.scope: Deactivated successfully.
Dec  4 05:15:54 np0005545273 podman[91138]: 2025-12-04 10:15:54.260671757 +0000 UTC m=+0.063223364 container create faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:15:54 np0005545273 systemd[1]: Started libpod-conmon-faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c.scope.
Dec  4 05:15:54 np0005545273 podman[91138]: 2025-12-04 10:15:54.219974554 +0000 UTC m=+0.022526181 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:54 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:54 np0005545273 podman[91138]: 2025-12-04 10:15:54.348601194 +0000 UTC m=+0.151152821 container init faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:15:54 np0005545273 podman[91138]: 2025-12-04 10:15:54.354629212 +0000 UTC m=+0.157180809 container start faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:54 np0005545273 inspiring_mirzakhani[91154]: 167 167
Dec  4 05:15:54 np0005545273 podman[91138]: 2025-12-04 10:15:54.358954317 +0000 UTC m=+0.161506014 container attach faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:15:54 np0005545273 systemd[1]: libpod-faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c.scope: Deactivated successfully.
Dec  4 05:15:54 np0005545273 podman[91138]: 2025-12-04 10:15:54.360195368 +0000 UTC m=+0.162746965 container died faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec  4 05:15:54 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b8f3d90943f70c6ebce3ac0c2a40b23c174f1f0d93ab7c9f97b1a7e093b18060-merged.mount: Deactivated successfully.
Dec  4 05:15:54 np0005545273 podman[91138]: 2025-12-04 10:15:54.400257575 +0000 UTC m=+0.202809202 container remove faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mirzakhani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:54 np0005545273 systemd[1]: libpod-conmon-faa9575ea477e28f6dd7ce4aab39712f452d09e878d0e5f514eb5b294503af2c.scope: Deactivated successfully.
Dec  4 05:15:54 np0005545273 podman[91177]: 2025-12-04 10:15:54.606253583 +0000 UTC m=+0.057766951 container create 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:15:54 np0005545273 systemd[1]: Started libpod-conmon-12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc.scope.
Dec  4 05:15:54 np0005545273 podman[91177]: 2025-12-04 10:15:54.580417373 +0000 UTC m=+0.031930781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:15:54 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:54 np0005545273 podman[91177]: 2025-12-04 10:15:54.709599876 +0000 UTC m=+0.161113244 container init 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:54 np0005545273 podman[91177]: 2025-12-04 10:15:54.728380465 +0000 UTC m=+0.179893823 container start 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:15:54 np0005545273 podman[91177]: 2025-12-04 10:15:54.733250013 +0000 UTC m=+0.184763381 container attach 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]: {
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:    "0": [
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:        {
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "devices": [
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "/dev/loop3"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            ],
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_name": "ceph_lv0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_size": "21470642176",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "name": "ceph_lv0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "tags": {
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.crush_device_class": "",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.encrypted": "0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osd_id": "0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.type": "block",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.vdo": "0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.with_tpm": "0"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            },
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "type": "block",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "vg_name": "ceph_vg0"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:        }
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:    ],
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:    "1": [
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:        {
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "devices": [
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "/dev/loop4"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            ],
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_name": "ceph_lv1",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_size": "21470642176",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "name": "ceph_lv1",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "tags": {
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.crush_device_class": "",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.encrypted": "0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osd_id": "1",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.type": "block",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.vdo": "0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.with_tpm": "0"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            },
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "type": "block",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "vg_name": "ceph_vg1"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:        }
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:    ],
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:    "2": [
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:        {
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "devices": [
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "/dev/loop5"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            ],
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_name": "ceph_lv2",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_size": "21470642176",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "name": "ceph_lv2",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "tags": {
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.cluster_name": "ceph",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.crush_device_class": "",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.encrypted": "0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.objectstore": "bluestore",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osd_id": "2",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.type": "block",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.vdo": "0",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:                "ceph.with_tpm": "0"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            },
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "type": "block",
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:            "vg_name": "ceph_vg2"
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:        }
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]:    ]
Dec  4 05:15:55 np0005545273 upbeat_pasteur[91195]: }
Dec  4 05:15:55 np0005545273 systemd[1]: libpod-12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc.scope: Deactivated successfully.
Dec  4 05:15:55 np0005545273 podman[91177]: 2025-12-04 10:15:55.045658529 +0000 UTC m=+0.497171967 container died 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:15:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-220544e8604a427ab6c7ffddeba3113a8f2f2afb992f20a9da65eeb3ff1f77a5-merged.mount: Deactivated successfully.
Dec  4 05:15:55 np0005545273 podman[91177]: 2025-12-04 10:15:55.102395265 +0000 UTC m=+0.553908653 container remove 12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_pasteur, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:55 np0005545273 systemd[1]: libpod-conmon-12400f9f389e722fc8a7668ae9febe3eff5816b8b751965a2ca7ae04d666c2fc.scope: Deactivated successfully.
Dec  4 05:15:55 np0005545273 podman[91277]: 2025-12-04 10:15:55.635179389 +0000 UTC m=+0.048424232 container create ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:15:55 np0005545273 systemd[1]: Started libpod-conmon-ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8.scope.
Dec  4 05:15:55 np0005545273 podman[91277]: 2025-12-04 10:15:55.612810303 +0000 UTC m=+0.026055136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:55 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:55 np0005545273 podman[91277]: 2025-12-04 10:15:55.730328662 +0000 UTC m=+0.143573555 container init ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:15:55 np0005545273 podman[91277]: 2025-12-04 10:15:55.740378437 +0000 UTC m=+0.153623260 container start ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:15:55 np0005545273 podman[91277]: 2025-12-04 10:15:55.745028741 +0000 UTC m=+0.158273644 container attach ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:15:55 np0005545273 upbeat_nash[91293]: 167 167
Dec  4 05:15:55 np0005545273 systemd[1]: libpod-ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8.scope: Deactivated successfully.
Dec  4 05:15:55 np0005545273 podman[91277]: 2025-12-04 10:15:55.74907166 +0000 UTC m=+0.162316483 container died ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:15:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-67a5e3c467735141c6ac7fc836ee6ce1701c906d70e15bcad126ae5740eb91a2-merged.mount: Deactivated successfully.
Dec  4 05:15:55 np0005545273 podman[91277]: 2025-12-04 10:15:55.791521036 +0000 UTC m=+0.204765849 container remove ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nash, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:15:55 np0005545273 systemd[1]: libpod-conmon-ca1841aebf890cadb96d5edf8e0f40299a2bd17aabc9dea17f979aae78d77dc8.scope: Deactivated successfully.
Dec  4 05:15:55 np0005545273 podman[91318]: 2025-12-04 10:15:55.986777402 +0000 UTC m=+0.050063753 container create 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:15:56 np0005545273 systemd[1]: Started libpod-conmon-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope.
Dec  4 05:15:56 np0005545273 podman[91318]: 2025-12-04 10:15:55.964240952 +0000 UTC m=+0.027527303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:15:56 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:15:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:15:56 np0005545273 podman[91318]: 2025-12-04 10:15:56.075829786 +0000 UTC m=+0.139116127 container init 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:15:56 np0005545273 podman[91318]: 2025-12-04 10:15:56.082929709 +0000 UTC m=+0.146216030 container start 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:15:56 np0005545273 podman[91318]: 2025-12-04 10:15:56.087256425 +0000 UTC m=+0.150542746 container attach 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:15:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:15:56 np0005545273 lvm[91412]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:15:56 np0005545273 lvm[91412]: VG ceph_vg0 finished
Dec  4 05:15:56 np0005545273 lvm[91414]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:15:56 np0005545273 lvm[91414]: VG ceph_vg1 finished
Dec  4 05:15:56 np0005545273 lvm[91416]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:15:56 np0005545273 lvm[91416]: VG ceph_vg2 finished
Dec  4 05:15:56 np0005545273 goofy_lehmann[91335]: {}
Dec  4 05:15:56 np0005545273 systemd[1]: libpod-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope: Deactivated successfully.
Dec  4 05:15:56 np0005545273 systemd[1]: libpod-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope: Consumed 1.410s CPU time.
Dec  4 05:15:56 np0005545273 podman[91318]: 2025-12-04 10:15:56.934217329 +0000 UTC m=+0.997503660 container died 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:15:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1ee256a36900f4a91b6aa186bd7543349da5db8e5e11cd164c6af0d41f0937b7-merged.mount: Deactivated successfully.
Dec  4 05:15:56 np0005545273 podman[91318]: 2025-12-04 10:15:56.981598856 +0000 UTC m=+1.044885197 container remove 64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lehmann, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:15:56 np0005545273 systemd[1]: libpod-conmon-64419f2ec0fa5c5a3e36518dac4432e3802bf6b1658bc588492a0fe591a8cb9d.scope: Deactivated successfully.
Dec  4 05:15:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:15:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:15:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:15:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:15:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:15:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:15:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:15:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:15:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:15:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:15:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:01 np0005545273 python3[91481]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:01 np0005545273 podman[91483]: 2025-12-04 10:16:01.669784904 +0000 UTC m=+0.066567096 container create 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:01 np0005545273 systemd[1]: Started libpod-conmon-5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710.scope.
Dec  4 05:16:01 np0005545273 podman[91483]: 2025-12-04 10:16:01.647465459 +0000 UTC m=+0.044247651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:01 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:01 np0005545273 podman[91483]: 2025-12-04 10:16:01.77859199 +0000 UTC m=+0.175374192 container init 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:16:01 np0005545273 podman[91483]: 2025-12-04 10:16:01.78720748 +0000 UTC m=+0.183989652 container start 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:01 np0005545273 podman[91483]: 2025-12-04 10:16:01.791007013 +0000 UTC m=+0.187789215 container attach 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  4 05:16:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  4 05:16:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115381984' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  4 05:16:02 np0005545273 suspicious_bose[91500]: 
Dec  4 05:16:02 np0005545273 suspicious_bose[91500]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":115,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":19,"num_osds":3,"num_up_osds":3,"osd_up_since":1764843345,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502874112,"bytes_avail":63909052416,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2025-12-04T10:14:03:532003+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-04T10:15:28.674444+0000","services":{}},"progress_events":{}}
Dec  4 05:16:02 np0005545273 systemd[1]: libpod-5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710.scope: Deactivated successfully.
Dec  4 05:16:02 np0005545273 podman[91483]: 2025-12-04 10:16:02.353053372 +0000 UTC m=+0.749835574 container died 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:02 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2d4c54cc9d984da6226e5c944c3d2966b4f9c6d88e9a3d0be98d5003c357ef6e-merged.mount: Deactivated successfully.
Dec  4 05:16:02 np0005545273 podman[91483]: 2025-12-04 10:16:02.399342082 +0000 UTC m=+0.796124244 container remove 5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710 (image=quay.io/ceph/ceph:v20, name=suspicious_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:16:02 np0005545273 systemd[1]: libpod-conmon-5b677eb13b4371da0d693f7b09b0b385b0c064182441ee99820672a9bf43e710.scope: Deactivated successfully.
Dec  4 05:16:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:02 np0005545273 python3[91563]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:02 np0005545273 podman[91564]: 2025-12-04 10:16:02.986072064 +0000 UTC m=+0.046619269 container create 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:16:03 np0005545273 systemd[1]: Started libpod-conmon-963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9.scope.
Dec  4 05:16:03 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ab6466cb1f16c6cacd1655ba3a44442d5619aafd81006ddb7a402549104efa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ab6466cb1f16c6cacd1655ba3a44442d5619aafd81006ddb7a402549104efa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:03 np0005545273 podman[91564]: 2025-12-04 10:16:03.058510542 +0000 UTC m=+0.119057767 container init 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:03 np0005545273 podman[91564]: 2025-12-04 10:16:02.968335412 +0000 UTC m=+0.028882647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:03 np0005545273 podman[91564]: 2025-12-04 10:16:03.06700623 +0000 UTC m=+0.127553435 container start 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:16:03 np0005545273 podman[91564]: 2025-12-04 10:16:03.071371386 +0000 UTC m=+0.131918591 container attach 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec  4 05:16:03 np0005545273 optimistic_zhukovsky[91580]: pool 'vms' created
Dec  4 05:16:03 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec  4 05:16:03 np0005545273 systemd[1]: libpod-963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9.scope: Deactivated successfully.
Dec  4 05:16:03 np0005545273 podman[91564]: 2025-12-04 10:16:03.850661119 +0000 UTC m=+0.911208344 container died 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec  4 05:16:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay-81ab6466cb1f16c6cacd1655ba3a44442d5619aafd81006ddb7a402549104efa-merged.mount: Deactivated successfully.
Dec  4 05:16:03 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:03 np0005545273 podman[91564]: 2025-12-04 10:16:03.896983769 +0000 UTC m=+0.957530974 container remove 963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9 (image=quay.io/ceph/ceph:v20, name=optimistic_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:03 np0005545273 systemd[1]: libpod-conmon-963fcdb4b1055e563fa3ba406b3e5db65a22cbd4f953fa7dbf66ef963d9a88c9.scope: Deactivated successfully.
Dec  4 05:16:04 np0005545273 python3[91645]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:04 np0005545273 podman[91646]: 2025-12-04 10:16:04.258695709 +0000 UTC m=+0.054741278 container create 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:04 np0005545273 systemd[1]: Started libpod-conmon-8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61.scope.
Dec  4 05:16:04 np0005545273 podman[91646]: 2025-12-04 10:16:04.230565032 +0000 UTC m=+0.026610681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f153db15e4e169599e0a5911d0cd53a29b42517a395f400b72d0848091e158ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f153db15e4e169599e0a5911d0cd53a29b42517a395f400b72d0848091e158ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:04 np0005545273 podman[91646]: 2025-12-04 10:16:04.365396383 +0000 UTC m=+0.161441972 container init 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:16:04 np0005545273 podman[91646]: 2025-12-04 10:16:04.375346307 +0000 UTC m=+0.171391916 container start 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:16:04 np0005545273 podman[91646]: 2025-12-04 10:16:04.380294797 +0000 UTC m=+0.176340386 container attach 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1204202594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec  4 05:16:04 np0005545273 keen_euler[91662]: pool 'volumes' created
Dec  4 05:16:04 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec  4 05:16:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:04 np0005545273 systemd[1]: libpod-8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61.scope: Deactivated successfully.
Dec  4 05:16:04 np0005545273 podman[91646]: 2025-12-04 10:16:04.863876391 +0000 UTC m=+0.659921960 container died 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:16:04 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f153db15e4e169599e0a5911d0cd53a29b42517a395f400b72d0848091e158ec-merged.mount: Deactivated successfully.
Dec  4 05:16:04 np0005545273 podman[91646]: 2025-12-04 10:16:04.908604673 +0000 UTC m=+0.704650252 container remove 8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61 (image=quay.io/ceph/ceph:v20, name=keen_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:04 np0005545273 systemd[1]: libpod-conmon-8067e45b92dee0156ff0e89ad88289fa63fe1c7a012617d0734f529e7582de61.scope: Deactivated successfully.
Dec  4 05:16:05 np0005545273 python3[91728]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:05 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:05 np0005545273 podman[91729]: 2025-12-04 10:16:05.265048814 +0000 UTC m=+0.047120012 container create 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:05 np0005545273 systemd[1]: Started libpod-conmon-101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253.scope.
Dec  4 05:16:05 np0005545273 podman[91729]: 2025-12-04 10:16:05.245700641 +0000 UTC m=+0.027771869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:05 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:05 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c58ad66bfb5cd08fbceb40e5befa7ed534111070455a92948c025533606deac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:05 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c58ad66bfb5cd08fbceb40e5befa7ed534111070455a92948c025533606deac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:05 np0005545273 podman[91729]: 2025-12-04 10:16:05.373694346 +0000 UTC m=+0.155765614 container init 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:16:05 np0005545273 podman[91729]: 2025-12-04 10:16:05.381960107 +0000 UTC m=+0.164031295 container start 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:05 np0005545273 podman[91729]: 2025-12-04 10:16:05.385491364 +0000 UTC m=+0.167562572 container attach 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec  4 05:16:05 np0005545273 quirky_fermat[91744]: pool 'backups' created
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec  4 05:16:05 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1471047535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:05 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:05 np0005545273 systemd[1]: libpod-101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253.scope: Deactivated successfully.
Dec  4 05:16:05 np0005545273 podman[91729]: 2025-12-04 10:16:05.873747532 +0000 UTC m=+0.655818720 container died 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:05 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4c58ad66bfb5cd08fbceb40e5befa7ed534111070455a92948c025533606deac-merged.mount: Deactivated successfully.
Dec  4 05:16:05 np0005545273 podman[91729]: 2025-12-04 10:16:05.912662673 +0000 UTC m=+0.694733851 container remove 101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253 (image=quay.io/ceph/ceph:v20, name=quirky_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:16:05 np0005545273 systemd[1]: libpod-conmon-101ab5c78b0af544a9f78e9c5ee0ad4b138eb29b53225d70311b2bb1cc253253.scope: Deactivated successfully.
Dec  4 05:16:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:06 np0005545273 python3[91809]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:06 np0005545273 podman[91810]: 2025-12-04 10:16:06.287151273 +0000 UTC m=+0.043875451 container create 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:16:06 np0005545273 systemd[1]: Started libpod-conmon-1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae.scope.
Dec  4 05:16:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8e4a2f9cf72d1be36aab6b98f5adf37dfc41690255df1f1a73195e966cdbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8e4a2f9cf72d1be36aab6b98f5adf37dfc41690255df1f1a73195e966cdbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:06 np0005545273 podman[91810]: 2025-12-04 10:16:06.358968626 +0000 UTC m=+0.115692824 container init 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:06 np0005545273 podman[91810]: 2025-12-04 10:16:06.26898037 +0000 UTC m=+0.025704568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:06 np0005545273 podman[91810]: 2025-12-04 10:16:06.366227623 +0000 UTC m=+0.122951801 container start 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:06 np0005545273 podman[91810]: 2025-12-04 10:16:06.370056157 +0000 UTC m=+0.126780335 container attach 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:16:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v63: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec  4 05:16:06 np0005545273 confident_edison[91826]: pool 'images' created
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec  4 05:16:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1281447236' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:06 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3592461387' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:06 np0005545273 systemd[1]: libpod-1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae.scope: Deactivated successfully.
Dec  4 05:16:06 np0005545273 podman[91810]: 2025-12-04 10:16:06.87817783 +0000 UTC m=+0.634902018 container died 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:06 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b3b8e4a2f9cf72d1be36aab6b98f5adf37dfc41690255df1f1a73195e966cdbd-merged.mount: Deactivated successfully.
Dec  4 05:16:06 np0005545273 podman[91810]: 2025-12-04 10:16:06.921083318 +0000 UTC m=+0.677807496 container remove 1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae (image=quay.io/ceph/ceph:v20, name=confident_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:16:06 np0005545273 systemd[1]: libpod-conmon-1f8d1955927d48ac394dde7dad9d3416e29a9120d37b4d3f695285b42c71afae.scope: Deactivated successfully.
Dec  4 05:16:07 np0005545273 python3[91890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:07 np0005545273 podman[91891]: 2025-12-04 10:16:07.358201127 +0000 UTC m=+0.074565910 container create 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:16:07 np0005545273 systemd[1]: Started libpod-conmon-9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a.scope.
Dec  4 05:16:07 np0005545273 podman[91891]: 2025-12-04 10:16:07.329792494 +0000 UTC m=+0.046157357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5455e618b80291eb466ce001b4f9a04fdd0967ab602cd8c4c3e70979df3c7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5455e618b80291eb466ce001b4f9a04fdd0967ab602cd8c4c3e70979df3c7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:07 np0005545273 podman[91891]: 2025-12-04 10:16:07.458883615 +0000 UTC m=+0.175248408 container init 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:07 np0005545273 podman[91891]: 2025-12-04 10:16:07.466965743 +0000 UTC m=+0.183330526 container start 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:16:07 np0005545273 podman[91891]: 2025-12-04 10:16:07.471423721 +0000 UTC m=+0.187788514 container attach 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:16:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  4 05:16:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec  4 05:16:07 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec  4 05:16:07 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  4 05:16:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v66: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  4 05:16:08 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec  4 05:16:08 np0005545273 amazing_shamir[91906]: pool 'cephfs.cephfs.meta' created
Dec  4 05:16:08 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec  4 05:16:08 np0005545273 systemd[1]: libpod-9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a.scope: Deactivated successfully.
Dec  4 05:16:08 np0005545273 podman[91891]: 2025-12-04 10:16:08.924814769 +0000 UTC m=+1.641179652 container died 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:08 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2b5455e618b80291eb466ce001b4f9a04fdd0967ab602cd8c4c3e70979df3c7c-merged.mount: Deactivated successfully.
Dec  4 05:16:08 np0005545273 podman[91891]: 2025-12-04 10:16:08.97895319 +0000 UTC m=+1.695318003 container remove 9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a (image=quay.io/ceph/ceph:v20, name=amazing_shamir, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:09 np0005545273 systemd[1]: libpod-conmon-9a338773e1e656ed7c6f3603647c6afa9fad075746eb4741fdf56d14f70bbc2a.scope: Deactivated successfully.
Dec  4 05:16:09 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:09 np0005545273 python3[91969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:09 np0005545273 podman[91970]: 2025-12-04 10:16:09.39434832 +0000 UTC m=+0.079021990 container create 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:16:09 np0005545273 systemd[1]: Started libpod-conmon-1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d.scope.
Dec  4 05:16:09 np0005545273 podman[91970]: 2025-12-04 10:16:09.343877128 +0000 UTC m=+0.028550898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e61d03706de4c5b994d95b6dacd7ac54ad6422715cbd9d67c216f52a12632f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e61d03706de4c5b994d95b6dacd7ac54ad6422715cbd9d67c216f52a12632f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:09 np0005545273 podman[91970]: 2025-12-04 10:16:09.487371461 +0000 UTC m=+0.172045141 container init 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:16:09 np0005545273 podman[91970]: 2025-12-04 10:16:09.496084524 +0000 UTC m=+0.180758214 container start 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:16:09 np0005545273 podman[91970]: 2025-12-04 10:16:09.500058511 +0000 UTC m=+0.184732211 container attach 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:16:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  4 05:16:09 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/4151799274' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec  4 05:16:09 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec  4 05:16:09 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  4 05:16:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v69: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  4 05:16:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec  4 05:16:10 np0005545273 dreamy_hofstadter[91985]: pool 'cephfs.cephfs.data' created
Dec  4 05:16:10 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  4 05:16:10 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec  4 05:16:10 np0005545273 systemd[1]: libpod-1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d.scope: Deactivated successfully.
Dec  4 05:16:10 np0005545273 podman[91970]: 2025-12-04 10:16:10.961976137 +0000 UTC m=+1.646649857 container died 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:11 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3e61d03706de4c5b994d95b6dacd7ac54ad6422715cbd9d67c216f52a12632f3-merged.mount: Deactivated successfully.
Dec  4 05:16:11 np0005545273 podman[91970]: 2025-12-04 10:16:11.022645567 +0000 UTC m=+1.707319247 container remove 1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d (image=quay.io/ceph/ceph:v20, name=dreamy_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:16:11 np0005545273 systemd[1]: libpod-conmon-1ff58a9717a466d74a68392bdc0ab06b2adf938d9dc6b3cca997009c7e07ab7d.scope: Deactivated successfully.
Dec  4 05:16:11 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:11 np0005545273 python3[92048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:11 np0005545273 podman[92049]: 2025-12-04 10:16:11.544704131 +0000 UTC m=+0.088408219 container create 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:11 np0005545273 systemd[1]: Started libpod-conmon-55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc.scope.
Dec  4 05:16:11 np0005545273 podman[92049]: 2025-12-04 10:16:11.507826971 +0000 UTC m=+0.051530909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:11 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61474271913f5670396aabfd68589b1bb56dbaca61704e5d1ebbaea318cdcba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61474271913f5670396aabfd68589b1bb56dbaca61704e5d1ebbaea318cdcba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:11 np0005545273 podman[92049]: 2025-12-04 10:16:11.657432253 +0000 UTC m=+0.201136211 container init 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:11 np0005545273 podman[92049]: 2025-12-04 10:16:11.667678843 +0000 UTC m=+0.211382711 container start 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  4 05:16:11 np0005545273 podman[92049]: 2025-12-04 10:16:11.672359047 +0000 UTC m=+0.216063015 container attach 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  4 05:16:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec  4 05:16:11 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec  4 05:16:11 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/523878764' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  4 05:16:11 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec  4 05:16:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec  4 05:16:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  4 05:16:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  4 05:16:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec  4 05:16:12 np0005545273 sweet_liskov[92065]: enabled application 'rbd' on pool 'vms'
Dec  4 05:16:12 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec  4 05:16:12 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec  4 05:16:12 np0005545273 systemd[1]: libpod-55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc.scope: Deactivated successfully.
Dec  4 05:16:12 np0005545273 podman[92049]: 2025-12-04 10:16:12.986437603 +0000 UTC m=+1.530141501 container died 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:13 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d61474271913f5670396aabfd68589b1bb56dbaca61704e5d1ebbaea318cdcba-merged.mount: Deactivated successfully.
Dec  4 05:16:13 np0005545273 podman[92049]: 2025-12-04 10:16:13.038502014 +0000 UTC m=+1.582205902 container remove 55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc (image=quay.io/ceph/ceph:v20, name=sweet_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:16:13 np0005545273 systemd[1]: libpod-conmon-55ec83f480b6127b2bab718362d94efd3e28f0aa5b1ba6fe532aab678a1302dc.scope: Deactivated successfully.
Dec  4 05:16:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:13 np0005545273 python3[92128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:13 np0005545273 podman[92129]: 2025-12-04 10:16:13.451380532 +0000 UTC m=+0.056138691 container create 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:16:13 np0005545273 systemd[1]: Started libpod-conmon-42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e.scope.
Dec  4 05:16:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:13 np0005545273 podman[92129]: 2025-12-04 10:16:13.427087249 +0000 UTC m=+0.031845428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2781a935b2fb38e1cd213626a7625e12ffeb1fcaee2921c6acc23cb64a1667/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2781a935b2fb38e1cd213626a7625e12ffeb1fcaee2921c6acc23cb64a1667/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:13 np0005545273 podman[92129]: 2025-12-04 10:16:13.544851364 +0000 UTC m=+0.149609553 container init 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:16:13 np0005545273 podman[92129]: 2025-12-04 10:16:13.555194596 +0000 UTC m=+0.159952735 container start 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:16:13 np0005545273 podman[92129]: 2025-12-04 10:16:13.559946602 +0000 UTC m=+0.164704781 container attach 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:13 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2201750263' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  4 05:16:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec  4 05:16:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec  4 05:16:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  4 05:16:14 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec  4 05:16:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  4 05:16:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec  4 05:16:14 np0005545273 jolly_bell[92144]: enabled application 'rbd' on pool 'volumes'
Dec  4 05:16:14 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec  4 05:16:15 np0005545273 systemd[1]: libpod-42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e.scope: Deactivated successfully.
Dec  4 05:16:15 np0005545273 podman[92129]: 2025-12-04 10:16:15.013021741 +0000 UTC m=+1.617779910 container died 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:15 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ff2781a935b2fb38e1cd213626a7625e12ffeb1fcaee2921c6acc23cb64a1667-merged.mount: Deactivated successfully.
Dec  4 05:16:15 np0005545273 podman[92129]: 2025-12-04 10:16:15.066944217 +0000 UTC m=+1.671702346 container remove 42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e (image=quay.io/ceph/ceph:v20, name=jolly_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:15 np0005545273 systemd[1]: libpod-conmon-42d73a01a2b9008e2aef72f6e4effb9debbed7d9327a8d0a63bbbcc3911ae84e.scope: Deactivated successfully.
Dec  4 05:16:15 np0005545273 python3[92206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:15 np0005545273 podman[92207]: 2025-12-04 10:16:15.471914643 +0000 UTC m=+0.076642692 container create 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  4 05:16:15 np0005545273 systemd[1]: Started libpod-conmon-39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2.scope.
Dec  4 05:16:15 np0005545273 podman[92207]: 2025-12-04 10:16:15.439258866 +0000 UTC m=+0.043986995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:15 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f071d89ae7910af1f71d9d22c94b6aa870db603872c41038f6277659d9009e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f071d89ae7910af1f71d9d22c94b6aa870db603872c41038f6277659d9009e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:15 np0005545273 podman[92207]: 2025-12-04 10:16:15.584616504 +0000 UTC m=+0.189344593 container init 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:16:15 np0005545273 podman[92207]: 2025-12-04 10:16:15.59552869 +0000 UTC m=+0.200256749 container start 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:16:15 np0005545273 podman[92207]: 2025-12-04 10:16:15.598979365 +0000 UTC m=+0.203707434 container attach 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:15 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1071403904' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  4 05:16:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec  4 05:16:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec  4 05:16:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  4 05:16:16 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec  4 05:16:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  4 05:16:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  4 05:16:17 np0005545273 unruffled_mahavira[92222]: enabled application 'rbd' on pool 'backups'
Dec  4 05:16:17 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  4 05:16:17 np0005545273 systemd[1]: libpod-39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2.scope: Deactivated successfully.
Dec  4 05:16:17 np0005545273 podman[92207]: 2025-12-04 10:16:17.03348636 +0000 UTC m=+1.638214409 container died 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec  4 05:16:17 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f1f071d89ae7910af1f71d9d22c94b6aa870db603872c41038f6277659d9009e-merged.mount: Deactivated successfully.
Dec  4 05:16:17 np0005545273 podman[92207]: 2025-12-04 10:16:17.079847162 +0000 UTC m=+1.684575191 container remove 39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2 (image=quay.io/ceph/ceph:v20, name=unruffled_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:17 np0005545273 systemd[1]: libpod-conmon-39cdf955324fbde5085d449dbda4d06abd4d1540b00a9f8ca2a0ff3ceda0fdc2.scope: Deactivated successfully.
Dec  4 05:16:17 np0005545273 python3[92285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:17 np0005545273 podman[92286]: 2025-12-04 10:16:17.502310894 +0000 UTC m=+0.059833011 container create a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:17 np0005545273 systemd[1]: Started libpod-conmon-a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310.scope.
Dec  4 05:16:17 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:17 np0005545273 podman[92286]: 2025-12-04 10:16:17.480478371 +0000 UTC m=+0.038000468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00e82db5a4de706137f620a149e96ac68cbe0645b052fd5fac5f905460061adc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00e82db5a4de706137f620a149e96ac68cbe0645b052fd5fac5f905460061adc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:17 np0005545273 podman[92286]: 2025-12-04 10:16:17.593347217 +0000 UTC m=+0.150869314 container init a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:16:17 np0005545273 podman[92286]: 2025-12-04 10:16:17.605638497 +0000 UTC m=+0.163160574 container start a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:16:17 np0005545273 podman[92286]: 2025-12-04 10:16:17.60908556 +0000 UTC m=+0.166607637 container attach a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:18 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/733069007' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  4 05:16:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  4 05:16:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec  4 05:16:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  4 05:16:19 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec  4 05:16:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  4 05:16:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  4 05:16:19 np0005545273 loving_gauss[92301]: enabled application 'rbd' on pool 'images'
Dec  4 05:16:19 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  4 05:16:19 np0005545273 systemd[1]: libpod-a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310.scope: Deactivated successfully.
Dec  4 05:16:19 np0005545273 podman[92286]: 2025-12-04 10:16:19.054447983 +0000 UTC m=+1.611970060 container died a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:16:19 np0005545273 systemd[1]: var-lib-containers-storage-overlay-00e82db5a4de706137f620a149e96ac68cbe0645b052fd5fac5f905460061adc-merged.mount: Deactivated successfully.
Dec  4 05:16:19 np0005545273 podman[92286]: 2025-12-04 10:16:19.117205484 +0000 UTC m=+1.674727601 container remove a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310 (image=quay.io/ceph/ceph:v20, name=loving_gauss, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:19 np0005545273 systemd[1]: libpod-conmon-a0c8510ea6a5bf472cfb505a47f7d8c875f43e1e0d1e6ec811481e7eae622310.scope: Deactivated successfully.
Dec  4 05:16:19 np0005545273 python3[92363]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:19 np0005545273 podman[92364]: 2025-12-04 10:16:19.540268961 +0000 UTC m=+0.067609032 container create 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  4 05:16:19 np0005545273 systemd[1]: Started libpod-conmon-3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d.scope.
Dec  4 05:16:19 np0005545273 podman[92364]: 2025-12-04 10:16:19.510010782 +0000 UTC m=+0.037350953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:19 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd91fea5cf1193072753c8edb535a7700f68d04804dc86ee08edfadc4546eec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd91fea5cf1193072753c8edb535a7700f68d04804dc86ee08edfadc4546eec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:19 np0005545273 podman[92364]: 2025-12-04 10:16:19.629278803 +0000 UTC m=+0.156618894 container init 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:19 np0005545273 podman[92364]: 2025-12-04 10:16:19.638624572 +0000 UTC m=+0.165964683 container start 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:19 np0005545273 podman[92364]: 2025-12-04 10:16:19.644437084 +0000 UTC m=+0.171777185 container attach 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:20 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1490703649' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  4 05:16:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec  4 05:16:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec  4 05:16:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  4 05:16:21 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec  4 05:16:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  4 05:16:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  4 05:16:21 np0005545273 eager_hugle[92379]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  4 05:16:21 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  4 05:16:21 np0005545273 systemd[1]: libpod-3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d.scope: Deactivated successfully.
Dec  4 05:16:21 np0005545273 podman[92404]: 2025-12-04 10:16:21.115424032 +0000 UTC m=+0.039909413 container died 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  4 05:16:21 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3dd91fea5cf1193072753c8edb535a7700f68d04804dc86ee08edfadc4546eec-merged.mount: Deactivated successfully.
Dec  4 05:16:21 np0005545273 podman[92404]: 2025-12-04 10:16:21.159238258 +0000 UTC m=+0.083723649 container remove 3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d (image=quay.io/ceph/ceph:v20, name=eager_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:21 np0005545273 systemd[1]: libpod-conmon-3fda3d6b8791b3e9165cfeeb3c46b147f90227a84ae1b09d264eab243678623d.scope: Deactivated successfully.
Dec  4 05:16:21 np0005545273 python3[92444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:21 np0005545273 podman[92445]: 2025-12-04 10:16:21.616388743 +0000 UTC m=+0.067248717 container create d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:16:21 np0005545273 systemd[1]: Started libpod-conmon-d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1.scope.
Dec  4 05:16:21 np0005545273 podman[92445]: 2025-12-04 10:16:21.589411797 +0000 UTC m=+0.040271801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:21 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31eb3cd9cf2adca95fb929fdff3b13621196a8e6e919471e655e24911be379d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31eb3cd9cf2adca95fb929fdff3b13621196a8e6e919471e655e24911be379d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:21 np0005545273 podman[92445]: 2025-12-04 10:16:21.71894738 +0000 UTC m=+0.169807354 container init d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:16:21 np0005545273 podman[92445]: 2025-12-04 10:16:21.730148933 +0000 UTC m=+0.181008937 container start d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:21 np0005545273 podman[92445]: 2025-12-04 10:16:21.736149279 +0000 UTC m=+0.187009293 container attach d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:22 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2254392936' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  4 05:16:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec  4 05:16:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec  4 05:16:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  4 05:16:23 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec  4 05:16:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  4 05:16:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  4 05:16:23 np0005545273 amazing_napier[92460]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  4 05:16:23 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  4 05:16:23 np0005545273 systemd[1]: libpod-d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1.scope: Deactivated successfully.
Dec  4 05:16:23 np0005545273 podman[92445]: 2025-12-04 10:16:23.103948118 +0000 UTC m=+1.554808092 container died d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:16:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c31eb3cd9cf2adca95fb929fdff3b13621196a8e6e919471e655e24911be379d-merged.mount: Deactivated successfully.
Dec  4 05:16:23 np0005545273 podman[92445]: 2025-12-04 10:16:23.156695091 +0000 UTC m=+1.607555065 container remove d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1 (image=quay.io/ceph/ceph:v20, name=amazing_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:16:23 np0005545273 systemd[1]: libpod-conmon-d5724ffbea1923d339a57121f960407a5f6b29c19bccc2201a159126f56468d1.scope: Deactivated successfully.
Dec  4 05:16:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:24 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/409955285' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  4 05:16:24 np0005545273 python3[92572]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:16:24 np0005545273 python3[92643]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843383.9314525-36514-156799691508851/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:16:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:25 np0005545273 python3[92745]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:16:25 np0005545273 python3[92820]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843385.0338879-36528-207500483054033/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=4d95922f97b49ea28e47c382de2b5d80693dc831 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:16:26 np0005545273 python3[92870]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:26 np0005545273 podman[92871]: 2025-12-04 10:16:26.458356129 +0000 UTC m=+0.054813986 container create 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:26 np0005545273 systemd[1]: Started libpod-conmon-8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1.scope.
Dec  4 05:16:26 np0005545273 podman[92871]: 2025-12-04 10:16:26.430812138 +0000 UTC m=+0.027270015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:26 np0005545273 podman[92871]: 2025-12-04 10:16:26.56854704 +0000 UTC m=+0.165004987 container init 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:26 np0005545273 podman[92871]: 2025-12-04 10:16:26.575679293 +0000 UTC m=+0.172137190 container start 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:26 np0005545273 podman[92871]: 2025-12-04 10:16:26.581021594 +0000 UTC m=+0.177479491 container attach 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:16:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:16:26
Dec  4 05:16:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:16:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:16:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups']
Dec  4 05:16:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:16:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  4 05:16:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  4 05:16:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  4 05:16:26 np0005545273 wizardly_jemison[92886]: 
Dec  4 05:16:26 np0005545273 wizardly_jemison[92886]: [global]
Dec  4 05:16:26 np0005545273 wizardly_jemison[92886]: #011fsid = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d
Dec  4 05:16:26 np0005545273 wizardly_jemison[92886]: #011mon_host = 192.168.122.100
Dec  4 05:16:26 np0005545273 wizardly_jemison[92886]: #011rgw_keystone_api_version = 3
Dec  4 05:16:27 np0005545273 systemd[1]: libpod-8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1.scope: Deactivated successfully.
Dec  4 05:16:27 np0005545273 podman[92871]: 2025-12-04 10:16:27.024834435 +0000 UTC m=+0.621292322 container died 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-542c7aed9cf46369471aabb53315199e839e4182aa2f4cef9d9e3f17b7d334da-merged.mount: Deactivated successfully.
Dec  4 05:16:27 np0005545273 podman[92871]: 2025-12-04 10:16:27.077930317 +0000 UTC m=+0.674388214 container remove 8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1 (image=quay.io/ceph/ceph:v20, name=wizardly_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:16:27 np0005545273 systemd[1]: libpod-conmon-8856b79ab7d545b3be7d758898a826fc277082d92c893f89f380ec0a04185ac1.scope: Deactivated successfully.
Dec  4 05:16:27 np0005545273 python3[92999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:27 np0005545273 podman[93014]: 2025-12-04 10:16:27.569348838 +0000 UTC m=+0.084014776 container create ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:27 np0005545273 systemd[1]: Started libpod-conmon-ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f.scope.
Dec  4 05:16:27 np0005545273 podman[93014]: 2025-12-04 10:16:27.532662095 +0000 UTC m=+0.047328143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:27 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:27 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:27 np0005545273 podman[93014]: 2025-12-04 10:16:27.670423707 +0000 UTC m=+0.185089685 container init ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:27 np0005545273 podman[93014]: 2025-12-04 10:16:27.679498819 +0000 UTC m=+0.194164757 container start ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:27 np0005545273 podman[93014]: 2025-12-04 10:16:27.683184738 +0000 UTC m=+0.197850866 container attach ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:27 np0005545273 podman[93062]: 2025-12-04 10:16:27.739887738 +0000 UTC m=+0.076255337 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:27 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  4 05:16:27 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2721298245' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  4 05:16:27 np0005545273 podman[93062]: 2025-12-04 10:16:27.844826242 +0000 UTC m=+0.181193871 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:16:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:16:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:16:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2436301133' entity='client.admin' 
Dec  4 05:16:28 np0005545273 eloquent_easley[93055]: set ssl_option
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:28 np0005545273 systemd[1]: libpod-ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f.scope: Deactivated successfully.
Dec  4 05:16:28 np0005545273 podman[93181]: 2025-12-04 10:16:28.318222564 +0000 UTC m=+0.031660412 container died ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:16:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0b0094151e313e4c2f7133d258085f953e99d1d1a781051d2f76309ae100c7ce-merged.mount: Deactivated successfully.
Dec  4 05:16:28 np0005545273 podman[93181]: 2025-12-04 10:16:28.364552751 +0000 UTC m=+0.077990509 container remove ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f (image=quay.io/ceph/ceph:v20, name=eloquent_easley, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:16:28 np0005545273 systemd[1]: libpod-conmon-ca561d74804843eca09f6dac380ad0c3443872a973a49b6874d4ad25b9d6336f.scope: Deactivated successfully.
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:28 np0005545273 python3[93269]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  4 05:16:28 np0005545273 podman[93281]: 2025-12-04 10:16:28.760346445 +0000 UTC m=+0.061170020 container create 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2436301133' entity='client.admin' 
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  4 05:16:28 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev a3b094dd-b703-45b1-a600-dd5543626180 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:16:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:28 np0005545273 systemd[1]: Started libpod-conmon-4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833.scope.
Dec  4 05:16:28 np0005545273 podman[93281]: 2025-12-04 10:16:28.740713477 +0000 UTC m=+0.041537072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:28 np0005545273 podman[93281]: 2025-12-04 10:16:28.898568598 +0000 UTC m=+0.199392173 container init 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:28 np0005545273 podman[93281]: 2025-12-04 10:16:28.906472971 +0000 UTC m=+0.207296546 container start 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec  4 05:16:28 np0005545273 podman[93281]: 2025-12-04 10:16:28.914866275 +0000 UTC m=+0.215689870 container attach 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:29 np0005545273 podman[93375]: 2025-12-04 10:16:29.141255525 +0000 UTC m=+0.059865628 container create 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:16:29 np0005545273 systemd[1]: Started libpod-conmon-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope.
Dec  4 05:16:29 np0005545273 podman[93375]: 2025-12-04 10:16:29.112890614 +0000 UTC m=+0.031500757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:29 np0005545273 podman[93375]: 2025-12-04 10:16:29.239189789 +0000 UTC m=+0.157799942 container init 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:16:29 np0005545273 podman[93375]: 2025-12-04 10:16:29.250721889 +0000 UTC m=+0.169332022 container start 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:29 np0005545273 podman[93375]: 2025-12-04 10:16:29.255123876 +0000 UTC m=+0.173734019 container attach 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:16:29 np0005545273 vibrant_euclid[93391]: 167 167
Dec  4 05:16:29 np0005545273 systemd[1]: libpod-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope: Deactivated successfully.
Dec  4 05:16:29 np0005545273 conmon[93391]: conmon 041d0833b6926fab11b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope/container/memory.events
Dec  4 05:16:29 np0005545273 podman[93375]: 2025-12-04 10:16:29.265974791 +0000 UTC m=+0.184584904 container died 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay-790bf1ebcae194e50bc492c33b67666964808443711286f93b0d7adb7ec93f55-merged.mount: Deactivated successfully.
Dec  4 05:16:29 np0005545273 podman[93375]: 2025-12-04 10:16:29.308943316 +0000 UTC m=+0.227553419 container remove 041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_euclid, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:29 np0005545273 systemd[1]: libpod-conmon-041d0833b6926fab11b1891cd89f4fc5bd2d5a1c8e6ca2cc886d8d725c11056a.scope: Deactivated successfully.
Dec  4 05:16:29 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:16:29 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec  4 05:16:29 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:29 np0005545273 vigilant_northcutt[93339]: Scheduled rgw.rgw update...
Dec  4 05:16:29 np0005545273 systemd[1]: libpod-4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833.scope: Deactivated successfully.
Dec  4 05:16:29 np0005545273 podman[93281]: 2025-12-04 10:16:29.375147348 +0000 UTC m=+0.675970923 container died 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec  4 05:16:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay-22fabf50b8e7da1965e960f508bc5535edbb6d3f5bcdac3f00b8326c9e2788f1-merged.mount: Deactivated successfully.
Dec  4 05:16:29 np0005545273 podman[93281]: 2025-12-04 10:16:29.413013199 +0000 UTC m=+0.713836774 container remove 4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833 (image=quay.io/ceph/ceph:v20, name=vigilant_northcutt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:29 np0005545273 systemd[1]: libpod-conmon-4778f336b2683a4f4a8f9f3402002f41a2048727b153c1a96ec7ac7e271c1833.scope: Deactivated successfully.
Dec  4 05:16:29 np0005545273 podman[93429]: 2025-12-04 10:16:29.488129367 +0000 UTC m=+0.047557969 container create 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:16:29 np0005545273 systemd[1]: Started libpod-conmon-918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9.scope.
Dec  4 05:16:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:29 np0005545273 podman[93429]: 2025-12-04 10:16:29.467826523 +0000 UTC m=+0.027255135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:29 np0005545273 podman[93429]: 2025-12-04 10:16:29.565544532 +0000 UTC m=+0.124973144 container init 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:29 np0005545273 podman[93429]: 2025-12-04 10:16:29.582198097 +0000 UTC m=+0.141626689 container start 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  4 05:16:29 np0005545273 podman[93429]: 2025-12-04 10:16:29.586301067 +0000 UTC m=+0.145729719 container attach 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  4 05:16:29 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev f0b8ae94-a712-4e37-a160-babe7e42db15 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:16:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:30 np0005545273 infallible_northcutt[93445]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:16:30 np0005545273 infallible_northcutt[93445]: --> All data devices are unavailable
Dec  4 05:16:30 np0005545273 systemd[1]: libpod-918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9.scope: Deactivated successfully.
Dec  4 05:16:30 np0005545273 podman[93429]: 2025-12-04 10:16:30.140618797 +0000 UTC m=+0.700047389 container died 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-7c65ab3d089749ff22c223ad40ca26298ccdc0d343b03f9b3b8de5465e84e911-merged.mount: Deactivated successfully.
Dec  4 05:16:30 np0005545273 podman[93429]: 2025-12-04 10:16:30.195625716 +0000 UTC m=+0.755054308 container remove 918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:16:30 np0005545273 systemd[1]: libpod-conmon-918c321c713127fe3d80c8c0738c76fbc71ba078ba76d1d7515fed8358b8c1e9.scope: Deactivated successfully.
Dec  4 05:16:30 np0005545273 python3[93602]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:16:30 np0005545273 podman[93636]: 2025-12-04 10:16:30.627288852 +0000 UTC m=+0.040638480 container create 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:16:30 np0005545273 systemd[1]: Started libpod-conmon-2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed.scope.
Dec  4 05:16:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:30 np0005545273 podman[93636]: 2025-12-04 10:16:30.694712643 +0000 UTC m=+0.108062311 container init 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:30 np0005545273 podman[93636]: 2025-12-04 10:16:30.700866713 +0000 UTC m=+0.114216351 container start 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:30 np0005545273 podman[93636]: 2025-12-04 10:16:30.609262943 +0000 UTC m=+0.022612611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:30 np0005545273 podman[93636]: 2025-12-04 10:16:30.704850049 +0000 UTC m=+0.118199687 container attach 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:16:30 np0005545273 serene_almeida[93678]: 167 167
Dec  4 05:16:30 np0005545273 systemd[1]: libpod-2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed.scope: Deactivated successfully.
Dec  4 05:16:30 np0005545273 podman[93636]: 2025-12-04 10:16:30.708335705 +0000 UTC m=+0.121685363 container died 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-46f71bf5488be3813c144391e2cc903972e1dbed847fde97856b9b9330aad92e-merged.mount: Deactivated successfully.
Dec  4 05:16:30 np0005545273 podman[93636]: 2025-12-04 10:16:30.753425342 +0000 UTC m=+0.166774970 container remove 2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_almeida, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:30 np0005545273 systemd[1]: libpod-conmon-2b74c0d55a610070e6ef64f42fa6ff4fc568c7081f535a9d5aa4c86e50fc9eed.scope: Deactivated successfully.
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  4 05:16:30 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.048410416s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 67.081153870s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:30 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 37 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=14.048410416s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 67.081153870s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: Saving service rgw.rgw spec with placement compute-0
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:30 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev fe7e3f41-2f49-4445-9440-8b10495b4a6a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:16:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:30 np0005545273 python3[93703]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843390.2445598-36569-115274787865741/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:16:30 np0005545273 podman[93725]: 2025-12-04 10:16:30.918907389 +0000 UTC m=+0.038981510 container create bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:30 np0005545273 systemd[1]: Started libpod-conmon-bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d.scope.
Dec  4 05:16:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:30 np0005545273 podman[93725]: 2025-12-04 10:16:30.900555343 +0000 UTC m=+0.020629484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:31 np0005545273 podman[93725]: 2025-12-04 10:16:31.006460461 +0000 UTC m=+0.126534582 container init bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:31 np0005545273 podman[93725]: 2025-12-04 10:16:31.015824799 +0000 UTC m=+0.135898920 container start bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:16:31 np0005545273 podman[93725]: 2025-12-04 10:16:31.019970249 +0000 UTC m=+0.140044370 container attach bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]: {
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:    "0": [
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:        {
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "devices": [
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "/dev/loop3"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            ],
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_name": "ceph_lv0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_size": "21470642176",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "name": "ceph_lv0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "tags": {
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.crush_device_class": "",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.encrypted": "0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osd_id": "0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.type": "block",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.vdo": "0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.with_tpm": "0"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            },
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "type": "block",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "vg_name": "ceph_vg0"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:        }
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:    ],
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:    "1": [
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:        {
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "devices": [
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "/dev/loop4"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            ],
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_name": "ceph_lv1",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_size": "21470642176",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "name": "ceph_lv1",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "tags": {
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.crush_device_class": "",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.encrypted": "0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osd_id": "1",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.type": "block",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.vdo": "0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.with_tpm": "0"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            },
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "type": "block",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "vg_name": "ceph_vg1"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:        }
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:    ],
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:    "2": [
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:        {
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "devices": [
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "/dev/loop5"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            ],
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_name": "ceph_lv2",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_size": "21470642176",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "name": "ceph_lv2",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "tags": {
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.crush_device_class": "",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.encrypted": "0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osd_id": "2",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.type": "block",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.vdo": "0",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:                "ceph.with_tpm": "0"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            },
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "type": "block",
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:            "vg_name": "ceph_vg2"
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:        }
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]:    ]
Dec  4 05:16:31 np0005545273 competent_satoshi[93766]: }
Dec  4 05:16:31 np0005545273 systemd[1]: libpod-bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d.scope: Deactivated successfully.
Dec  4 05:16:31 np0005545273 podman[93725]: 2025-12-04 10:16:31.326654094 +0000 UTC m=+0.446728215 container died bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:31 np0005545273 python3[93798]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-dec29a41c70c47236eed0fe2c2621bf260237bdd5a4907996c5ed7bd28df2b4f-merged.mount: Deactivated successfully.
Dec  4 05:16:31 np0005545273 podman[93725]: 2025-12-04 10:16:31.37624825 +0000 UTC m=+0.496322371 container remove bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:31 np0005545273 systemd[1]: libpod-conmon-bc9e55c0a8b213d414ba9e6e79f5591902fd86d4c3e1129b120ea7cc315c903d.scope: Deactivated successfully.
Dec  4 05:16:31 np0005545273 podman[93814]: 2025-12-04 10:16:31.425086529 +0000 UTC m=+0.051479554 container create ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:16:31 np0005545273 systemd[1]: Started libpod-conmon-ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989.scope.
Dec  4 05:16:31 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:31 np0005545273 podman[93814]: 2025-12-04 10:16:31.490748997 +0000 UTC m=+0.117142042 container init ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec  4 05:16:31 np0005545273 podman[93814]: 2025-12-04 10:16:31.498359182 +0000 UTC m=+0.124752207 container start ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:16:31 np0005545273 podman[93814]: 2025-12-04 10:16:31.405617755 +0000 UTC m=+0.032010810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:31 np0005545273 podman[93814]: 2025-12-04 10:16:31.501540469 +0000 UTC m=+0.127933494 container attach ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  4 05:16:31 np0005545273 podman[93913]: 2025-12-04 10:16:31.82042305 +0000 UTC m=+0.042925435 container create 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:31 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev f9b75358-ee27-4a3c-ac3f-817e92e49fbe (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=14.020574570s) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active pruub 76.672187805s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=14.020574570s) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown pruub 76.672187805s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 38 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=21/22 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:31 np0005545273 systemd[1]: Started libpod-conmon-0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a.scope.
Dec  4 05:16:31 np0005545273 podman[93913]: 2025-12-04 10:16:31.798717333 +0000 UTC m=+0.021219748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:31 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:31 np0005545273 podman[93913]: 2025-12-04 10:16:31.911886927 +0000 UTC m=+0.134389362 container init 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:16:31 np0005545273 podman[93913]: 2025-12-04 10:16:31.91941073 +0000 UTC m=+0.141913115 container start 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:16:31 np0005545273 pedantic_cerf[93929]: 167 167
Dec  4 05:16:31 np0005545273 systemd[1]: libpod-0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a.scope: Deactivated successfully.
Dec  4 05:16:31 np0005545273 podman[93913]: 2025-12-04 10:16:31.923644953 +0000 UTC m=+0.146147358 container attach 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:16:31 np0005545273 podman[93913]: 2025-12-04 10:16:31.925256422 +0000 UTC m=+0.147758827 container died 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:16:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:16:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  4 05:16:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-29c6a01d7ed7db11ab5549fa07e77339c1a916232d690cdc620789ee804acc08-merged.mount: Deactivated successfully.
Dec  4 05:16:31 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0[75354]: 2025-12-04T10:16:31.946+0000 7f6c157b8640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e2 new map
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e2 print_map
e2
btime 2025-12-04T10:16:31:947702+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-04T10:16:31.947313+0000
modified	2025-12-04T10:16:31.947313+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members: 
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  4 05:16:31 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev 0d70b3bf-3b35-43f6-8448-42122400e8e7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  4 05:16:31 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=37/39 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [1] r=0 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:31 np0005545273 podman[93913]: 2025-12-04 10:16:31.973208309 +0000 UTC m=+0.195710694 container remove 0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  4 05:16:31 np0005545273 systemd[1]: libpod-conmon-0fa2803e5e4b391af9771d49a52e5289b91830276c66719872227799d416d68a.scope: Deactivated successfully.
Dec  4 05:16:31 np0005545273 systemd[1]: libpod-ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989.scope: Deactivated successfully.
Dec  4 05:16:31 np0005545273 podman[93814]: 2025-12-04 10:16:31.997589983 +0000 UTC m=+0.623982998 container died ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 05:16:32 np0005545273 systemd[1]: var-lib-containers-storage-overlay-516d408fbde46f6578d5ff0d3acfcb3eb14a40bfebd6de4b2bf4b8de50ff1771-merged.mount: Deactivated successfully.
Dec  4 05:16:32 np0005545273 podman[93814]: 2025-12-04 10:16:32.033907317 +0000 UTC m=+0.660300342 container remove ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989 (image=quay.io/ceph/ceph:v20, name=clever_colden, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:32 np0005545273 systemd[1]: libpod-conmon-ad73e3b6693e79a2c4951aba62a2ad84200d57678ceb70a7900b66d4544c5989.scope: Deactivated successfully.
Dec  4 05:16:32 np0005545273 podman[93967]: 2025-12-04 10:16:32.14377065 +0000 UTC m=+0.043514870 container create c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:16:32 np0005545273 systemd[1]: Started libpod-conmon-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope.
Dec  4 05:16:32 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:32 np0005545273 podman[93967]: 2025-12-04 10:16:32.12568775 +0000 UTC m=+0.025431990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:32 np0005545273 podman[93967]: 2025-12-04 10:16:32.228363229 +0000 UTC m=+0.128107479 container init c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:32 np0005545273 podman[93967]: 2025-12-04 10:16:32.235642846 +0000 UTC m=+0.135387086 container start c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:32 np0005545273 podman[93967]: 2025-12-04 10:16:32.240244198 +0000 UTC m=+0.139988418 container attach c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec  4 05:16:32 np0005545273 python3[94013]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:32 np0005545273 podman[94015]: 2025-12-04 10:16:32.487339813 +0000 UTC m=+0.066406568 container create de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:32 np0005545273 systemd[1]: Started libpod-conmon-de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea.scope.
Dec  4 05:16:32 np0005545273 podman[94015]: 2025-12-04 10:16:32.459819032 +0000 UTC m=+0.038885827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:32 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:32 np0005545273 podman[94015]: 2025-12-04 10:16:32.561668581 +0000 UTC m=+0.140735376 container init de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:32 np0005545273 podman[94015]: 2025-12-04 10:16:32.567659557 +0000 UTC m=+0.146726322 container start de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:32 np0005545273 podman[94015]: 2025-12-04 10:16:32.573056878 +0000 UTC m=+0.152123653 container attach de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 1 peering, 31 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: Saving service mds.cephfs spec with placement compute-0
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  4 05:16:32 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev a9acb27e-f811-43ae-b16b-6b6b4373fc73 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev a3b094dd-b703-45b1-a600-dd5543626180 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event a3b094dd-b703-45b1-a600-dd5543626180 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 4 seconds
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev f0b8ae94-a712-4e37-a160-babe7e42db15 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event f0b8ae94-a712-4e37-a160-babe7e42db15 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev fe7e3f41-2f49-4445-9440-8b10495b4a6a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event fe7e3f41-2f49-4445-9440-8b10495b4a6a (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev f9b75358-ee27-4a3c-ac3f-817e92e49fbe (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event f9b75358-ee27-4a3c-ac3f-817e92e49fbe (PG autoscaler increasing pool 5 PGs from 1 to 32) in 1 seconds
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev 0d70b3bf-3b35-43f6-8448-42122400e8e7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event 0d70b3bf-3b35-43f6-8448-42122400e8e7 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev a9acb27e-f811-43ae-b16b-6b6b4373fc73 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  4 05:16:32 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event a9acb27e-f811-43ae-b16b-6b6b4373fc73 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec  4 05:16:32 np0005545273 lvm[94123]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:16:32 np0005545273 lvm[94126]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:16:32 np0005545273 lvm[94123]: VG ceph_vg0 finished
Dec  4 05:16:32 np0005545273 lvm[94126]: VG ceph_vg1 finished
Dec  4 05:16:32 np0005545273 lvm[94128]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:16:32 np0005545273 lvm[94128]: VG ceph_vg2 finished
Dec  4 05:16:33 np0005545273 lvm[94130]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:16:33 np0005545273 lvm[94130]: VG ceph_vg1 finished
Dec  4 05:16:33 np0005545273 lvm[94129]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:16:33 np0005545273 lvm[94129]: VG ceph_vg0 finished
Dec  4 05:16:33 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:16:33 np0005545273 ceph-mgr[75651]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  4 05:16:33 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:33 np0005545273 beautiful_clarke[94039]: Scheduled mds.cephfs update...
Dec  4 05:16:33 np0005545273 systemd[1]: libpod-de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea.scope: Deactivated successfully.
Dec  4 05:16:33 np0005545273 podman[94015]: 2025-12-04 10:16:33.078776326 +0000 UTC m=+0.657843091 container died de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:16:33 np0005545273 inspiring_snyder[93996]: {}
Dec  4 05:16:33 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b60109bb18cd21373232024db857353c9a82aa1e323b7fea219aceb290c7cfb4-merged.mount: Deactivated successfully.
Dec  4 05:16:33 np0005545273 podman[94015]: 2025-12-04 10:16:33.130201038 +0000 UTC m=+0.709267803 container remove de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea (image=quay.io/ceph/ceph:v20, name=beautiful_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:33 np0005545273 systemd[1]: libpod-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope: Deactivated successfully.
Dec  4 05:16:33 np0005545273 podman[93967]: 2025-12-04 10:16:33.138227403 +0000 UTC m=+1.037971623 container died c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:16:33 np0005545273 systemd[1]: libpod-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope: Consumed 1.422s CPU time.
Dec  4 05:16:33 np0005545273 systemd[1]: libpod-conmon-de14e8ba4d15b1e2a65465b55bf8179c34dd40c6e208a6a89a2eebadb3bcbaea.scope: Deactivated successfully.
Dec  4 05:16:33 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e5a3d58ee7337ac5a194ac58981d6791e503228a88e5c562880300e562430f74-merged.mount: Deactivated successfully.
Dec  4 05:16:33 np0005545273 podman[93967]: 2025-12-04 10:16:33.184452548 +0000 UTC m=+1.084196768 container remove c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:16:33 np0005545273 systemd[1]: libpod-conmon-c32bc1987925a5affdcad97e0ecb989ec58792ffadf8f6df4e2b40a095f23d83.scope: Deactivated successfully.
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:33 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=8.615765572s) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active pruub 79.052185059s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:33 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=13.555814743s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active pruub 83.992263794s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:33 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=13.555814743s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown pruub 83.992263794s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:33 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=8.615765572s) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown pruub 79.052185059s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:33 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=14.472810745s) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active pruub 70.113349915s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:33 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=14.472810745s) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown pruub 70.113349915s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:33 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  4 05:16:33 np0005545273 python3[94319]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 05:16:33 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: Saving service mds.cephfs spec with placement compute-0
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:33 np0005545273 podman[94363]: 2025-12-04 10:16:33.982289626 +0000 UTC m=+0.064707946 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:34 np0005545273 podman[94363]: 2025-12-04 10:16:34.113586762 +0000 UTC m=+0.196005092 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:34 np0005545273 python3[94452]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843393.5508635-36599-127084991356701/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=78fa63d8c69ed08876e15c6d423f4ac4e13914fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.15( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.14( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.17( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.16( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.11( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.10( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.13( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.12( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.2( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.3( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.6( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.18( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.7( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.8( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.19( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.4( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.9( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.5( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=40/41 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [2] r=0 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=40/41 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=40/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [0] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:34 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec  4 05:16:34 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec  4 05:16:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v96: 162 pgs: 1 peering, 124 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:34 np0005545273 python3[94613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:34 np0005545273 podman[94632]: 2025-12-04 10:16:34.840626526 +0000 UTC m=+0.043088110 container create 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:34 np0005545273 systemd[76741]: Starting Mark boot as successful...
Dec  4 05:16:34 np0005545273 systemd[1]: Started libpod-conmon-3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c.scope.
Dec  4 05:16:34 np0005545273 systemd[76741]: Finished Mark boot as successful.
Dec  4 05:16:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb807216214b36b617ad45464160a5e56536b700032c0ea6cc1694a6b66d628f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb807216214b36b617ad45464160a5e56536b700032c0ea6cc1694a6b66d628f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:34 np0005545273 podman[94632]: 2025-12-04 10:16:34.819575134 +0000 UTC m=+0.022036698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:34 np0005545273 podman[94632]: 2025-12-04 10:16:34.936720555 +0000 UTC m=+0.139182149 container init 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:34 np0005545273 podman[94632]: 2025-12-04 10:16:34.951271049 +0000 UTC m=+0.153732623 container start 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:16:34 np0005545273 podman[94632]: 2025-12-04 10:16:34.95581201 +0000 UTC m=+0.158273594 container attach 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:35 np0005545273 podman[94734]: 2025-12-04 10:16:35.329509454 +0000 UTC m=+0.056619698 container create 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:16:35 np0005545273 systemd[1]: Started libpod-conmon-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope.
Dec  4 05:16:35 np0005545273 podman[94734]: 2025-12-04 10:16:35.304876866 +0000 UTC m=+0.031987130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:35 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:35 np0005545273 podman[94734]: 2025-12-04 10:16:35.432707256 +0000 UTC m=+0.159817510 container init 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:35 np0005545273 podman[94734]: 2025-12-04 10:16:35.441259394 +0000 UTC m=+0.168369668 container start 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:16:35 np0005545273 podman[94734]: 2025-12-04 10:16:35.445651411 +0000 UTC m=+0.172761675 container attach 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:35 np0005545273 clever_payne[94750]: 167 167
Dec  4 05:16:35 np0005545273 systemd[1]: libpod-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope: Deactivated successfully.
Dec  4 05:16:35 np0005545273 conmon[94750]: conmon 5d1afebf8b4a9c1b5495 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope/container/memory.events
Dec  4 05:16:35 np0005545273 podman[94734]: 2025-12-04 10:16:35.451473224 +0000 UTC m=+0.178583518 container died 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec  4 05:16:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  4 05:16:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e17cfb3fd095b1a8234a7cdfdd1088723414941090c93d536a7d8c4bf260dd79-merged.mount: Deactivated successfully.
Dec  4 05:16:35 np0005545273 systemd[1]: libpod-3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c.scope: Deactivated successfully.
Dec  4 05:16:35 np0005545273 podman[94734]: 2025-12-04 10:16:35.50186142 +0000 UTC m=+0.228971664 container remove 5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_payne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:35 np0005545273 podman[94632]: 2025-12-04 10:16:35.502606918 +0000 UTC m=+0.705068472 container died 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:35 np0005545273 systemd[1]: libpod-conmon-5d1afebf8b4a9c1b5495a66f435a3760118de8f9ef78dabc8674ff7ec127e955.scope: Deactivated successfully.
Dec  4 05:16:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fb807216214b36b617ad45464160a5e56536b700032c0ea6cc1694a6b66d628f-merged.mount: Deactivated successfully.
Dec  4 05:16:35 np0005545273 podman[94632]: 2025-12-04 10:16:35.550034922 +0000 UTC m=+0.752496476 container remove 3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c (image=quay.io/ceph/ceph:v20, name=crazy_sinoussi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec  4 05:16:35 np0005545273 systemd[1]: libpod-conmon-3c4bb7624cb117de1c90470636f7167a9bee94f6d4dfc2a76ae2413bdcaf368c.scope: Deactivated successfully.
Dec  4 05:16:35 np0005545273 podman[94788]: 2025-12-04 10:16:35.663694528 +0000 UTC m=+0.045975260 container create 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:35 np0005545273 systemd[1]: Started libpod-conmon-1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5.scope.
Dec  4 05:16:35 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:35 np0005545273 podman[94788]: 2025-12-04 10:16:35.641789145 +0000 UTC m=+0.024069907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:35 np0005545273 podman[94788]: 2025-12-04 10:16:35.748983354 +0000 UTC m=+0.131264116 container init 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:16:35 np0005545273 podman[94788]: 2025-12-04 10:16:35.756414115 +0000 UTC m=+0.138694847 container start 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:35 np0005545273 podman[94788]: 2025-12-04 10:16:35.760162346 +0000 UTC m=+0.142443088 container attach 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  4 05:16:36 np0005545273 python3[94845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:36 np0005545273 angry_satoshi[94805]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:16:36 np0005545273 angry_satoshi[94805]: --> All data devices are unavailable
Dec  4 05:16:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:16:36 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec  4 05:16:36 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/1896971816' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  4 05:16:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=42 pruub=15.636316299s) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active pruub 82.768257141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=42 pruub=15.636316299s) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown pruub 82.768257141s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:36 np0005545273 systemd[1]: libpod-1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5.scope: Deactivated successfully.
Dec  4 05:16:36 np0005545273 podman[94788]: 2025-12-04 10:16:36.333986652 +0000 UTC m=+0.716267384 container died 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:16:36 np0005545273 podman[94852]: 2025-12-04 10:16:36.349675444 +0000 UTC m=+0.053208356 container create 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:36 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b60fc02371c1647d95a9f2053e81bcb050d74be54e89896c74e5af5b023fe975-merged.mount: Deactivated successfully.
Dec  4 05:16:36 np0005545273 systemd[1]: Started libpod-conmon-4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf.scope.
Dec  4 05:16:36 np0005545273 podman[94788]: 2025-12-04 10:16:36.395062328 +0000 UTC m=+0.777343060 container remove 1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_satoshi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:36 np0005545273 systemd[1]: libpod-conmon-1843b86e6c979500e126699038025064bd45f19c3f558fe85209d3c347a51bd5.scope: Deactivated successfully.
Dec  4 05:16:36 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:36 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdced4d29d969880fa50369ad7377ef32b31c7dc6041813fefcc3b9d36e2ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:36 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cdced4d29d969880fa50369ad7377ef32b31c7dc6041813fefcc3b9d36e2ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:36 np0005545273 podman[94852]: 2025-12-04 10:16:36.321063498 +0000 UTC m=+0.024596410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:36 np0005545273 podman[94852]: 2025-12-04 10:16:36.433906194 +0000 UTC m=+0.137439156 container init 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:16:36 np0005545273 podman[94852]: 2025-12-04 10:16:36.441393766 +0000 UTC m=+0.144926678 container start 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:36 np0005545273 podman[94852]: 2025-12-04 10:16:36.44483382 +0000 UTC m=+0.148366732 container attach 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:36 np0005545273 podman[94966]: 2025-12-04 10:16:36.82203007 +0000 UTC m=+0.040829275 container create 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:36 np0005545273 systemd[1]: Started libpod-conmon-65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e.scope.
Dec  4 05:16:36 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:36 np0005545273 podman[94966]: 2025-12-04 10:16:36.803595921 +0000 UTC m=+0.022395126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:36 np0005545273 podman[94966]: 2025-12-04 10:16:36.899935536 +0000 UTC m=+0.118734771 container init 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:36 np0005545273 podman[94966]: 2025-12-04 10:16:36.909847987 +0000 UTC m=+0.128647192 container start 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:16:36 np0005545273 laughing_bartik[94983]: 167 167
Dec  4 05:16:36 np0005545273 systemd[1]: libpod-65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e.scope: Deactivated successfully.
Dec  4 05:16:36 np0005545273 podman[94966]: 2025-12-04 10:16:36.91450292 +0000 UTC m=+0.133302155 container attach 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:16:36 np0005545273 podman[94966]: 2025-12-04 10:16:36.915015693 +0000 UTC m=+0.133814898 container died 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:16:36 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d52611072ca93095410e1653f18e792ee0ed55c848a82fb7b40ace31bc026f27-merged.mount: Deactivated successfully.
Dec  4 05:16:36 np0005545273 podman[94966]: 2025-12-04 10:16:36.954238428 +0000 UTC m=+0.173037633 container remove 65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  4 05:16:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417251768' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  4 05:16:36 np0005545273 upbeat_hamilton[94882]: 
Dec  4 05:16:36 np0005545273 upbeat_hamilton[94882]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":150,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764843345,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":124},{"state_name":"active+clean","count":37},{"state_name":"peering","count":1}],"num_pgs":162,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83996672,"bytes_avail":64327929856,"bytes_total":64411926528,"unknown_pgs_ratio":0.76543211936950684,"inactive_pgs_ratio":0.0061728395521640778},"fsmap":{"epoch":2,"btime":"2025-12-04T10:16:31:947702+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-04T10:15:28.674444+0000","services":{}},"progress_events":{"e8fbb843-ac01-485d-b1b9-727e8a8c205a":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  4 05:16:36 np0005545273 systemd[1]: libpod-conmon-65b2ec1b13a89175396e3fedc237c3bd38cfc0de1f78b805f19bb6e08171907e.scope: Deactivated successfully.
Dec  4 05:16:36 np0005545273 systemd[1]: libpod-4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf.scope: Deactivated successfully.
Dec  4 05:16:36 np0005545273 podman[94852]: 2025-12-04 10:16:36.984585987 +0000 UTC m=+0.688118969 container died 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:16:37 np0005545273 systemd[1]: var-lib-containers-storage-overlay-79cdced4d29d969880fa50369ad7377ef32b31c7dc6041813fefcc3b9d36e2ac-merged.mount: Deactivated successfully.
Dec  4 05:16:37 np0005545273 podman[94852]: 2025-12-04 10:16:37.039077042 +0000 UTC m=+0.742609964 container remove 4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf (image=quay.io/ceph/ceph:v20, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:16:37 np0005545273 systemd[1]: libpod-conmon-4981556bf5de83e1a0c06bb1845e4b52baf90671b9d6bdd6c231ef492100a8cf.scope: Deactivated successfully.
Dec  4 05:16:37 np0005545273 podman[95024]: 2025-12-04 10:16:37.155440475 +0000 UTC m=+0.060266179 container create 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:37 np0005545273 systemd[1]: Started libpod-conmon-09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0.scope.
Dec  4 05:16:37 np0005545273 podman[95024]: 2025-12-04 10:16:37.126887189 +0000 UTC m=+0.031712893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:37 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  4 05:16:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  4 05:16:37 np0005545273 podman[95024]: 2025-12-04 10:16:37.314042955 +0000 UTC m=+0.218868669 container init 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:16:37 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=27/28 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:37 np0005545273 podman[95024]: 2025-12-04 10:16:37.32492424 +0000 UTC m=+0.229749954 container start 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=42/43 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 podman[95024]: 2025-12-04 10:16:37.332482703 +0000 UTC m=+0.237308437 container attach 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=27/27 les/c/f=28/28/0 sis=42) [1] r=0 lpr=42 pi=[27,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:37 np0005545273 python3[95069]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:37 np0005545273 podman[95072]: 2025-12-04 10:16:37.566836967 +0000 UTC m=+0.054684232 container create ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:16:37 np0005545273 systemd[1]: Started libpod-conmon-ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1.scope.
Dec  4 05:16:37 np0005545273 podman[95072]: 2025-12-04 10:16:37.538952538 +0000 UTC m=+0.026799873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:37 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1639a6bdb8489828db9cd5ef06923d5676c4e496781122689ab63bc77a888271/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1639a6bdb8489828db9cd5ef06923d5676c4e496781122689ab63bc77a888271/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:37 np0005545273 podman[95072]: 2025-12-04 10:16:37.665945099 +0000 UTC m=+0.153792364 container init ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:37 np0005545273 podman[95072]: 2025-12-04 10:16:37.674366734 +0000 UTC m=+0.162213999 container start ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:37 np0005545273 podman[95072]: 2025-12-04 10:16:37.679383636 +0000 UTC m=+0.167230911 container attach ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:16:37 np0005545273 brave_poincare[95061]: {
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:    "0": [
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:        {
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "devices": [
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "/dev/loop3"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            ],
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_name": "ceph_lv0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_size": "21470642176",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "name": "ceph_lv0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "tags": {
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.crush_device_class": "",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.encrypted": "0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osd_id": "0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.type": "block",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.vdo": "0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.with_tpm": "0"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            },
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "type": "block",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "vg_name": "ceph_vg0"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:        }
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:    ],
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:    "1": [
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:        {
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "devices": [
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "/dev/loop4"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            ],
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_name": "ceph_lv1",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_size": "21470642176",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "name": "ceph_lv1",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "tags": {
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.crush_device_class": "",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.encrypted": "0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osd_id": "1",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.type": "block",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.vdo": "0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.with_tpm": "0"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            },
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "type": "block",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "vg_name": "ceph_vg1"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:        }
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:    ],
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:    "2": [
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:        {
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "devices": [
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "/dev/loop5"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            ],
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_name": "ceph_lv2",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_size": "21470642176",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "name": "ceph_lv2",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "tags": {
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.crush_device_class": "",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.encrypted": "0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osd_id": "2",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.type": "block",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.vdo": "0",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:                "ceph.with_tpm": "0"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            },
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "type": "block",
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:            "vg_name": "ceph_vg2"
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:        }
Dec  4 05:16:37 np0005545273 brave_poincare[95061]:    ]
Dec  4 05:16:37 np0005545273 brave_poincare[95061]: }
Dec  4 05:16:37 np0005545273 systemd[1]: libpod-09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0.scope: Deactivated successfully.
Dec  4 05:16:37 np0005545273 podman[95024]: 2025-12-04 10:16:37.719513552 +0000 UTC m=+0.624339236 container died 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:16:37 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 9 completed events
Dec  4 05:16:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:16:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:38 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  4 05:16:38 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  4 05:16:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:16:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707924097' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:16:39 np0005545273 flamboyant_wilson[95092]: 
Dec  4 05:16:39 np0005545273 flamboyant_wilson[95092]: {"epoch":1,"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","modified":"2025-12-04T10:14:01.294217Z","created":"2025-12-04T10:14:01.294217Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec  4 05:16:39 np0005545273 flamboyant_wilson[95092]: dumped monmap epoch 1
Dec  4 05:16:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a108f8dd66c449890e461193ff6f5dfecbdd7ceb2b2331536862b231b613852d-merged.mount: Deactivated successfully.
Dec  4 05:16:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  4 05:16:39 np0005545273 systemd[1]: libpod-ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1.scope: Deactivated successfully.
Dec  4 05:16:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  4 05:16:39 np0005545273 podman[95072]: 2025-12-04 10:16:39.601427924 +0000 UTC m=+2.089275219 container died ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:39 np0005545273 podman[95024]: 2025-12-04 10:16:39.635045474 +0000 UTC m=+2.539871148 container remove 09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1639a6bdb8489828db9cd5ef06923d5676c4e496781122689ab63bc77a888271-merged.mount: Deactivated successfully.
Dec  4 05:16:39 np0005545273 podman[95072]: 2025-12-04 10:16:39.676049831 +0000 UTC m=+2.163897086 container remove ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1 (image=quay.io/ceph/ceph:v20, name=flamboyant_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:16:39 np0005545273 systemd[1]: libpod-conmon-ef28b85f794c3dffbf4c16fd0c842f32316fe078b522e7695f4d010f705b95f1.scope: Deactivated successfully.
Dec  4 05:16:39 np0005545273 systemd[1]: libpod-conmon-09fff0ac552dc154191ede50c34681c71338575f9fb751d57e23f5fb04b6bac0.scope: Deactivated successfully.
Dec  4 05:16:40 np0005545273 podman[95225]: 2025-12-04 10:16:40.170337491 +0000 UTC m=+0.062637866 container create dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:40 np0005545273 systemd[1]: Started libpod-conmon-dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3.scope.
Dec  4 05:16:40 np0005545273 podman[95225]: 2025-12-04 10:16:40.143950409 +0000 UTC m=+0.036250854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:40 np0005545273 python3[95227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:40 np0005545273 podman[95225]: 2025-12-04 10:16:40.279651882 +0000 UTC m=+0.171952277 container init dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:16:40 np0005545273 podman[95225]: 2025-12-04 10:16:40.286629592 +0000 UTC m=+0.178929967 container start dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:40 np0005545273 podman[95225]: 2025-12-04 10:16:40.289646375 +0000 UTC m=+0.181946750 container attach dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:40 np0005545273 funny_franklin[95243]: 167 167
Dec  4 05:16:40 np0005545273 systemd[1]: libpod-dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3.scope: Deactivated successfully.
Dec  4 05:16:40 np0005545273 podman[95225]: 2025-12-04 10:16:40.291864419 +0000 UTC m=+0.184164804 container died dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:16:40 np0005545273 podman[95246]: 2025-12-04 10:16:40.312445449 +0000 UTC m=+0.050336146 container create 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:16:40 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8c40fa58852bd049868307ae6738ee77b2d6b93cc992f4cb7ed26e58473b93a3-merged.mount: Deactivated successfully.
Dec  4 05:16:40 np0005545273 podman[95225]: 2025-12-04 10:16:40.3531515 +0000 UTC m=+0.245451875 container remove dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:40 np0005545273 systemd[1]: Started libpod-conmon-24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93.scope.
Dec  4 05:16:40 np0005545273 systemd[1]: libpod-conmon-dd04e381e2930b99a46d6fc636b4b2ddf84a84fac31898243e420d310b5456d3.scope: Deactivated successfully.
Dec  4 05:16:40 np0005545273 podman[95246]: 2025-12-04 10:16:40.287302537 +0000 UTC m=+0.025193254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d177edd07e8db95ed90229ef8d125a77b700b3bd72e2f4609a177af32fb07ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d177edd07e8db95ed90229ef8d125a77b700b3bd72e2f4609a177af32fb07ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:40 np0005545273 podman[95246]: 2025-12-04 10:16:40.414206216 +0000 UTC m=+0.152096933 container init 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:16:40 np0005545273 podman[95246]: 2025-12-04 10:16:40.425420739 +0000 UTC m=+0.163311436 container start 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:16:40 np0005545273 podman[95246]: 2025-12-04 10:16:40.432078212 +0000 UTC m=+0.169968909 container attach 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:40 np0005545273 podman[95287]: 2025-12-04 10:16:40.51500592 +0000 UTC m=+0.036501470 container create 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:40 np0005545273 systemd[1]: Started libpod-conmon-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope.
Dec  4 05:16:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:40 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec  4 05:16:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:40 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec  4 05:16:40 np0005545273 podman[95287]: 2025-12-04 10:16:40.498075178 +0000 UTC m=+0.019570748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:40 np0005545273 podman[95287]: 2025-12-04 10:16:40.602623782 +0000 UTC m=+0.124119372 container init 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:40 np0005545273 podman[95287]: 2025-12-04 10:16:40.613827995 +0000 UTC m=+0.135323545 container start 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:40 np0005545273 podman[95287]: 2025-12-04 10:16:40.6230874 +0000 UTC m=+0.144582970 container attach 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:40 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec  4 05:16:40 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec  4 05:16:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec  4 05:16:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2551733214' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec  4 05:16:40 np0005545273 friendly_chandrasekhar[95278]: [client.openstack]
Dec  4 05:16:40 np0005545273 friendly_chandrasekhar[95278]: #011key = AQC7XjFpAAAAABAAfAp/GPFiYDh+96uFEDn7ew==
Dec  4 05:16:40 np0005545273 friendly_chandrasekhar[95278]: #011caps mgr = "allow *"
Dec  4 05:16:40 np0005545273 friendly_chandrasekhar[95278]: #011caps mon = "profile rbd"
Dec  4 05:16:40 np0005545273 friendly_chandrasekhar[95278]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  4 05:16:40 np0005545273 systemd[1]: libpod-24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93.scope: Deactivated successfully.
Dec  4 05:16:40 np0005545273 podman[95246]: 2025-12-04 10:16:40.994940741 +0000 UTC m=+0.732831438 container died 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:41 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0d177edd07e8db95ed90229ef8d125a77b700b3bd72e2f4609a177af32fb07ff-merged.mount: Deactivated successfully.
Dec  4 05:16:41 np0005545273 podman[95246]: 2025-12-04 10:16:41.036494541 +0000 UTC m=+0.774385238 container remove 24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93 (image=quay.io/ceph/ceph:v20, name=friendly_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:16:41 np0005545273 systemd[1]: libpod-conmon-24fe4e966195b2cbfa527e00143f908d81d766a0f13d9fbbe89acb9a76a0ec93.scope: Deactivated successfully.
Dec  4 05:16:41 np0005545273 lvm[95415]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:16:41 np0005545273 lvm[95415]: VG ceph_vg1 finished
Dec  4 05:16:41 np0005545273 lvm[95414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:16:41 np0005545273 lvm[95414]: VG ceph_vg0 finished
Dec  4 05:16:41 np0005545273 lvm[95417]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:16:41 np0005545273 lvm[95417]: VG ceph_vg2 finished
Dec  4 05:16:41 np0005545273 hopeful_mirzakhani[95306]: {}
Dec  4 05:16:41 np0005545273 systemd[1]: libpod-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope: Deactivated successfully.
Dec  4 05:16:41 np0005545273 systemd[1]: libpod-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope: Consumed 1.555s CPU time.
Dec  4 05:16:41 np0005545273 podman[95287]: 2025-12-04 10:16:41.567803833 +0000 UTC m=+1.089299403 container died 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:16:41 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  4 05:16:41 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  4 05:16:41 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0969cadf63d8fe9331e8c2a42c9a51c178a5699fa09f77c7e98209d584ee33bc-merged.mount: Deactivated successfully.
Dec  4 05:16:41 np0005545273 podman[95287]: 2025-12-04 10:16:41.619907911 +0000 UTC m=+1.141403461 container remove 7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2551733214' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec  4 05:16:41 np0005545273 systemd[1]: libpod-conmon-7f48ac99b93d3b5cc2a0a666b1aa2eeac433ec01eebe16b2f41255e6a68f5c37.scope: Deactivated successfully.
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:41 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev bfdba998-7c4a-43bc-88b1-0d08e2109171 (Updating rgw.rgw deployment (+1 -> 1))
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:41 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.jnsliu on compute-0
Dec  4 05:16:41 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.jnsliu on compute-0
Dec  4 05:16:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec  4 05:16:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec  4 05:16:42 np0005545273 podman[95575]: 2025-12-04 10:16:42.333245722 +0000 UTC m=+0.051661878 container create 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:16:42 np0005545273 systemd[1]: Started libpod-conmon-1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c.scope.
Dec  4 05:16:42 np0005545273 podman[95575]: 2025-12-04 10:16:42.310482348 +0000 UTC m=+0.028898554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:42 np0005545273 podman[95575]: 2025-12-04 10:16:42.442939222 +0000 UTC m=+0.161355388 container init 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:16:42 np0005545273 podman[95575]: 2025-12-04 10:16:42.450986188 +0000 UTC m=+0.169402354 container start 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:16:42 np0005545273 podman[95575]: 2025-12-04 10:16:42.455070417 +0000 UTC m=+0.173486603 container attach 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:42 np0005545273 stoic_bardeen[95635]: 167 167
Dec  4 05:16:42 np0005545273 systemd[1]: libpod-1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c.scope: Deactivated successfully.
Dec  4 05:16:42 np0005545273 podman[95575]: 2025-12-04 10:16:42.459993897 +0000 UTC m=+0.178410053 container died 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  4 05:16:42 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a38778e9a0c361a4c79db6737c39aa353321e500e25bb58a9ae7afc1098ef38f-merged.mount: Deactivated successfully.
Dec  4 05:16:42 np0005545273 podman[95575]: 2025-12-04 10:16:42.501435586 +0000 UTC m=+0.219851762 container remove 1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bardeen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:42 np0005545273 systemd[1]: libpod-conmon-1755175d5f2d26582d2ffd1b9a9337721b159e5011ceff1709751b3a217be23c.scope: Deactivated successfully.
Dec  4 05:16:42 np0005545273 systemd[1]: Reloading.
Dec  4 05:16:42 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:16:42 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:16:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec  4 05:16:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jnsliu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: Deploying daemon rgw.rgw.compute-0.jnsliu on compute-0
Dec  4 05:16:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:16:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:42 np0005545273 systemd[1]: Reloading.
Dec  4 05:16:42 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:16:42 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:16:42 np0005545273 ansible-async_wrapper.py[95742]: Invoked with j888336787719 30 /home/zuul/.ansible/tmp/ansible-tmp-1764843402.163865-36671-258255636267341/AnsiballZ_command.py _
Dec  4 05:16:42 np0005545273 ansible-async_wrapper.py[95783]: Starting module and watcher
Dec  4 05:16:42 np0005545273 ansible-async_wrapper.py[95783]: Start watching 95784 (30)
Dec  4 05:16:42 np0005545273 ansible-async_wrapper.py[95784]: Start module (95784)
Dec  4 05:16:42 np0005545273 ansible-async_wrapper.py[95742]: Return async_wrapper task started.
Dec  4 05:16:43 np0005545273 python3[95785]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:43 np0005545273 podman[95786]: 2025-12-04 10:16:43.32544913 +0000 UTC m=+0.050656434 container create 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:43 np0005545273 systemd[1]: Starting Ceph rgw.rgw.compute-0.jnsliu for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:16:43 np0005545273 systemd[1]: Started libpod-conmon-56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7.scope.
Dec  4 05:16:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109f5a4a9476da841f628b0e61c43377b71ec17db419ee6570e3eef7640b422c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:43 np0005545273 podman[95786]: 2025-12-04 10:16:43.306248293 +0000 UTC m=+0.031455627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109f5a4a9476da841f628b0e61c43377b71ec17db419ee6570e3eef7640b422c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:43 np0005545273 podman[95786]: 2025-12-04 10:16:43.413334489 +0000 UTC m=+0.138541823 container init 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:43 np0005545273 podman[95786]: 2025-12-04 10:16:43.422136693 +0000 UTC m=+0.147343997 container start 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:16:43 np0005545273 podman[95786]: 2025-12-04 10:16:43.426649834 +0000 UTC m=+0.151857138 container attach 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Dec  4 05:16:43 np0005545273 podman[95873]: 2025-12-04 10:16:43.61016399 +0000 UTC m=+0.039757179 container create 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-rgw-rgw-compute-0-jnsliu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec  4 05:16:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26da74e6664be5dc3b7d8970ed8fd09024cb54994d8bb572e2fc490646def3dd/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.jnsliu supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec  4 05:16:43 np0005545273 podman[95873]: 2025-12-04 10:16:43.67221453 +0000 UTC m=+0.101807739 container init 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-rgw-rgw-compute-0-jnsliu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:43 np0005545273 podman[95873]: 2025-12-04 10:16:43.677873038 +0000 UTC m=+0.107466227 container start 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-rgw-rgw-compute-0-jnsliu, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  4 05:16:43 np0005545273 bash[95873]: 94b64ba6339c9da554f5008c9bb9b6e0be8079586ac8e31d0c89f9aeb8c67181
Dec  4 05:16:43 np0005545273 podman[95873]: 2025-12-04 10:16:43.59168809 +0000 UTC m=+0.021281309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:43 np0005545273 systemd[1]: Started Ceph rgw.rgw.compute-0.jnsliu for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233694077s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.785179138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590950966s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142471313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233659744s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.785179138s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590942383s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142517090s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590877533s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142471313s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590904236s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142517090s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543291092s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.403182983s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543252945s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.403175354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543237686s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.403182983s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.543200493s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.403175354s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233083725s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784767151s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.233066559s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784767151s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232978821s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784767151s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590917587s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142707825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590906143s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142707825s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232964516s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784767151s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232871056s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784812927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590806961s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142761230s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232855797s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784812927s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232735634s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784713745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590793610s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142761230s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232714653s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784713745s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232473373s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784545898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232460976s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784545898s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232380867s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784500122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232365608s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784500122s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590691566s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142860413s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590670586s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142860413s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590633392s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142852783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590619087s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142852783s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232382774s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784637451s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232368469s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784637451s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232496262s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784820557s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590587616s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142921448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590498924s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142868042s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590560913s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142921448s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590484619s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142868042s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.232484818s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784820557s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590525627s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.143043518s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590511322s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.143043518s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590304375s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142936707s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590291023s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142936707s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590256691s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.142913818s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231328011s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784034729s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.590211868s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.142913818s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231087685s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783843994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231075287s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783843994s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550660133s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412078857s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550289154s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.411735535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550636292s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412078857s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550270081s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.411735535s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550599098s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412086487s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550566673s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412086487s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550761223s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412376404s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550747871s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412376404s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550764084s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412422180s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550706863s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412414551s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550726891s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412422180s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231225967s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784080505s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230737686s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783622742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231205940s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784080505s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230720520s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783622742s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594789505s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147804260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230801582s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783828735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230789185s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783828735s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594771385s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147804260s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594658852s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147827148s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230663300s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783843994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594639778s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147827148s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230645180s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783843994s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594615936s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147918701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230287552s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783607483s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594603539s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147918701s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230266571s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783607483s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594456673s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147933960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594440460s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147933960s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230001450s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783546448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594345093s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147933960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.229974747s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783546448s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594331741s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147933960s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594237328s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147956848s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230120659s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783843994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594224930s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147956848s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230099678s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783843994s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231058121s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.784912109s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594215393s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.148155212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594186783s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.148155212s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.229496956s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.783584595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.229449272s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.783584595s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594062805s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.148300171s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.594047546s) [0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.148300171s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.224846840s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 86.779197693s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.224827766s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.779197693s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550687790s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412414551s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550971985s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412849426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550957680s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412849426s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550982475s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412879944s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550964355s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412879944s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550533295s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412490845s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550518990s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412490845s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550880432s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412910461s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550874710s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412918091s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550860405s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412910461s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550854683s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412918091s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550910950s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.412994385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550898552s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.412994385s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551048279s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413200378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551035881s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413200378s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550999641s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413192749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550988197s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413192749s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550899506s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413124084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550881386s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413124084s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551022530s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413330078s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551007271s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413330078s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551032066s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413459778s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551012993s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413459778s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551015854s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413459778s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550999641s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413459778s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550935745s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413505554s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550924301s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413505554s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550952911s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413574219s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550939560s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413574219s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550898552s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413589478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550886154s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413589478s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550847054s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413597107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550811768s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413589478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550799370s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413589478s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550827026s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413597107s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550859451s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413681030s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550596237s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413414001s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550847054s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413681030s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550754547s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413658142s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550552368s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413414001s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550743103s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413658142s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552104950s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415061951s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550853729s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.413825989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552092552s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415061951s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550838470s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.413825989s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552052498s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415069580s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.552037239s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415069580s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551926613s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415077209s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.551891327s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415077209s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550276756s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415191650s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550256729s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415191650s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550207138s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415283203s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550189018s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415283203s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550127983s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415351868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550107002s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415351868s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550031662s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415351868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.550016403s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415351868s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.230355263s) [2] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784912109s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.593091011s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 active pruub 84.147735596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=42/43 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=9.593063354s) [2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 unknown NOTIFY pruub 84.147735596s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=37/39 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44 pruub=12.231313705s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 86.784034729s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.14( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549925804s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415359497s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549912453s) [2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415359497s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549885750s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415435791s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549877167s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415435791s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549964905s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 95.415626526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[6.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.549933434s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 95.415626526s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.097001076s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066772461s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.096982956s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066772461s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.530227661s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502059937s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.530211449s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502059937s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=0/0 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.078037262s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066703796s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.078000069s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066703796s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077732086s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077715874s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066741943s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077480316s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066696167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077457428s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066696167s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077162743s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066627502s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.077148438s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066627502s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512312889s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.501876831s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512302399s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.501876831s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076994896s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066650391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076984406s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066650391s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512131691s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.501861572s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512122154s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.501861572s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512247086s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502059937s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512236595s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502059937s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076711655s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066612244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.076702118s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066612244s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512184143s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502159119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.512175560s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502159119s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511563301s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502067566s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511548996s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502067566s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075968742s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066665649s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075948715s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066665649s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511302948s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502204895s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511286736s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502204895s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.514804840s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502189636s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.511129379s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502189636s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075341225s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066543579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.075322151s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066543579s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074984550s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066413879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074939728s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066413879s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074461937s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066513062s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510314941s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502410889s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510289192s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502410889s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510479927s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502388000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074448586s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066513062s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.510172844s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502388000s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509974480s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502372742s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509960175s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502372742s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074248314s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066787720s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.074235916s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066787720s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073794365s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066368103s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509723663s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502380371s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509709358s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502380371s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073698044s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066368103s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073497772s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066261292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073486328s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066261292s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509485245s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502334595s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509465218s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502334595s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509727478s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502601624s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509716034s) [0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502601624s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073373795s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066360474s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073348045s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066360474s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073848724s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066993713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509265900s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502418518s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073454857s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066619873s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073829651s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066993713s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509248734s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502418518s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.073436737s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066619873s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509227753s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502494812s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072925568s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066230774s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.509199142s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502494812s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072909355s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066230774s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072793961s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066223145s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072651863s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066223145s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072630882s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066223145s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072689056s) [1] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066223145s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072225571s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066314697s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.072202682s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066314697s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071744919s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066108704s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071716309s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066108704s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508111000s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502616882s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508092880s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502616882s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508024216s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502563477s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.508002281s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502563477s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.507984161s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502647400s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.507963181s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502647400s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071427345s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 active pruub 78.066177368s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44 pruub=12.071377754s) [0] r=-1 lpr=44 pi=[37,44)/1 crt=0'0 unknown NOTIFY pruub 78.066177368s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[6.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.503145218s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 active pruub 80.502670288s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:16:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=14.502871513s) [1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 unknown NOTIFY pruub 80.502670288s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:43 np0005545273 radosgw[95892]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:16:43 np0005545273 radosgw[95892]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Dec  4 05:16:43 np0005545273 radosgw[95892]: framework: beast
Dec  4 05:16:43 np0005545273 radosgw[95892]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  4 05:16:43 np0005545273 radosgw[95892]: init_numa not setting numa affinity
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev bfdba998-7c4a-43bc-88b1-0d08e2109171 (Updating rgw.rgw deployment (+1 -> 1))
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event bfdba998-7c4a-43bc-88b1-0d08e2109171 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev 658f0ade-7039-4f35-9ab8-5a45187848c0 (Updating mds.cephfs deployment (+1 -> 1))
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.zcbnoq on compute-0
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.zcbnoq on compute-0
Dec  4 05:16:43 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  4 05:16:43 np0005545273 brave_yonath[95805]: 
Dec  4 05:16:43 np0005545273 brave_yonath[95805]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  4 05:16:43 np0005545273 systemd[1]: libpod-56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7.scope: Deactivated successfully.
Dec  4 05:16:43 np0005545273 podman[95786]: 2025-12-04 10:16:43.866262022 +0000 UTC m=+0.591469326 container died 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-109f5a4a9476da841f628b0e61c43377b71ec17db419ee6570e3eef7640b422c-merged.mount: Deactivated successfully.
Dec  4 05:16:43 np0005545273 podman[95786]: 2025-12-04 10:16:43.913256366 +0000 UTC m=+0.638463670 container remove 56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7 (image=quay.io/ceph/ceph:v20, name=brave_yonath, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec  4 05:16:43 np0005545273 systemd[1]: libpod-conmon-56812eb7ac05f83818c11f9cd3dc1f0eaeca035cff622b1826896d1c7c1648d7.scope: Deactivated successfully.
Dec  4 05:16:43 np0005545273 ansible-async_wrapper.py[95784]: Module complete (95784)
Dec  4 05:16:44 np0005545273 podman[96070]: 2025-12-04 10:16:44.344236796 +0000 UTC m=+0.048193005 container create c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:16:44 np0005545273 systemd[1]: Started libpod-conmon-c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b.scope.
Dec  4 05:16:44 np0005545273 podman[96070]: 2025-12-04 10:16:44.319502063 +0000 UTC m=+0.023458302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:44 np0005545273 podman[96070]: 2025-12-04 10:16:44.43807798 +0000 UTC m=+0.142034279 container init c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:44 np0005545273 python3[96079]: ansible-ansible.legacy.async_status Invoked with jid=j888336787719.95742 mode=status _async_dir=/root/.ansible_async
Dec  4 05:16:44 np0005545273 podman[96070]: 2025-12-04 10:16:44.446631628 +0000 UTC m=+0.150587877 container start c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:16:44 np0005545273 podman[96070]: 2025-12-04 10:16:44.450861661 +0000 UTC m=+0.154817900 container attach c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:16:44 np0005545273 modest_proskuriakova[96089]: 167 167
Dec  4 05:16:44 np0005545273 systemd[1]: libpod-c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b.scope: Deactivated successfully.
Dec  4 05:16:44 np0005545273 podman[96070]: 2025-12-04 10:16:44.457766449 +0000 UTC m=+0.161722688 container died c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  4 05:16:44 np0005545273 systemd[1]: var-lib-containers-storage-overlay-83ca26111969d38a78fb7c3b7f3683ba3c2469629cb22f48c67999a23adc7bc5-merged.mount: Deactivated successfully.
Dec  4 05:16:44 np0005545273 podman[96070]: 2025-12-04 10:16:44.517944113 +0000 UTC m=+0.221900362 container remove c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:44 np0005545273 systemd[1]: libpod-conmon-c8e37ef675bfcfdf9686fb777276dbd9acd5f861d3cbc19e5e32128f7a86524b.scope: Deactivated successfully.
Dec  4 05:16:44 np0005545273 systemd[1]: Reloading.
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:44 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 10 completed events
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:44 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event e8fbb843-ac01-485d-b1b9-727e8a8c205a (Global Recovery Event) in 12 seconds
Dec  4 05:16:44 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  4 05:16:44 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  4 05:16:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: Saving service rgw.rgw spec with placement compute-0
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zcbnoq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: Deploying daemon mds.cephfs.compute-0.zcbnoq on compute-0
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec  4 05:16:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [2] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=42/27 lis/c=42/42 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/21 lis/c=37/37 les/c/f=39/39/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=44) [1] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:44 np0005545273 systemd[1]: Reloading.
Dec  4 05:16:44 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:16:44 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:16:45 np0005545273 python3[96191]: ansible-ansible.legacy.async_status Invoked with jid=j888336787719.95742 mode=cleanup _async_dir=/root/.ansible_async
Dec  4 05:16:45 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  4 05:16:45 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  4 05:16:45 np0005545273 systemd[1]: Starting Ceph mds.cephfs.compute-0.zcbnoq for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d...
Dec  4 05:16:45 np0005545273 podman[96277]: 2025-12-04 10:16:45.407290228 +0000 UTC m=+0.049469585 container create 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bdc90da99aae5c4a1ef34ef8720cef0ab08c898b58f8fcf94e302311081ca7/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.zcbnoq supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:45 np0005545273 podman[96277]: 2025-12-04 10:16:45.384797181 +0000 UTC m=+0.026976508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:45 np0005545273 podman[96277]: 2025-12-04 10:16:45.480437938 +0000 UTC m=+0.122617275 container init 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:16:45 np0005545273 podman[96277]: 2025-12-04 10:16:45.48622948 +0000 UTC m=+0.128408797 container start 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:45 np0005545273 bash[96277]: 8653c026f7d4e01391a33ebd4fc0a5ae26a89370484767be6f2c06ca6b15142b
Dec  4 05:16:45 np0005545273 systemd[1]: Started Ceph mds.cephfs.compute-0.zcbnoq for f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d.
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: main not setting numa affinity
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: pidfile_write: ignore empty --pid-file
Dec  4 05:16:45 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq[96293]: starting mds.cephfs.compute-0.zcbnoq at 
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 2 from mon.0
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev 658f0ade-7039-4f35-9ab8-5a45187848c0 (Updating mds.cephfs deployment (+1 -> 1))
Dec  4 05:16:45 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event 658f0ade-7039-4f35-9ab8-5a45187848c0 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 python3[96341]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 new map
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-12-04T10:16:45:747724+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-04T10:16:31.947313+0000#012modified#0112025-12-04T10:16:31.947313+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.zcbnoq{-1:14255} state up:standby seq 1 addr [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] compat {c=[1],r=[1],i=[1fff]}]
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 3 from mon.0
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Monitors have assigned me to become a standby
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] up:boot
Dec  4 05:16:45 np0005545273 podman[96370]: 2025-12-04 10:16:45.755029471 +0000 UTC m=+0.044311169 container create dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] as mds.0
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zcbnoq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zcbnoq"} v 0)
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.zcbnoq"} : dispatch
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e3 all = 0
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e4 new map
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-12-04T10:16:45:755624+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-04T10:16:31.947313+0000#012modified#0112025-12-04T10:16:45.755617+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14255}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.zcbnoq{0:14255} state up:creating seq 1 addr [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 4 from mon.0
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.4 handle_mds_map I am now mds.0.4
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x1
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x100
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x600
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x601
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x602
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x603
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x604
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x605
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x606
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x607
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x608
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.cache creating system inode with ino:0x609
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zcbnoq=up:creating}
Dec  4 05:16:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:45 np0005545273 ceph-mds[96299]: mds.0.4 creating_done
Dec  4 05:16:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.zcbnoq is now active in filesystem cephfs as rank 0
Dec  4 05:16:45 np0005545273 systemd[1]: Started libpod-conmon-dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7.scope.
Dec  4 05:16:45 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991d219d9600e15604264523f2653c63091a6ba250908f051c562a5eafa4c9ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/991d219d9600e15604264523f2653c63091a6ba250908f051c562a5eafa4c9ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:45 np0005545273 podman[96370]: 2025-12-04 10:16:45.73772138 +0000 UTC m=+0.027003108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:45 np0005545273 podman[96370]: 2025-12-04 10:16:45.84291888 +0000 UTC m=+0.132200608 container init dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:45 np0005545273 podman[96370]: 2025-12-04 10:16:45.850521066 +0000 UTC m=+0.139802764 container start dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:16:45 np0005545273 podman[96370]: 2025-12-04 10:16:45.854114113 +0000 UTC m=+0.143395831 container attach dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:46 np0005545273 podman[97071]: 2025-12-04 10:16:46.209207135 +0000 UTC m=+0.047934737 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:16:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  4 05:16:46 np0005545273 admiring_lamport[96489]: 
Dec  4 05:16:46 np0005545273 admiring_lamport[96489]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  4 05:16:46 np0005545273 systemd[1]: libpod-dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7.scope: Deactivated successfully.
Dec  4 05:16:46 np0005545273 podman[96370]: 2025-12-04 10:16:46.278519162 +0000 UTC m=+0.567800860 container died dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:46 np0005545273 systemd[1]: var-lib-containers-storage-overlay-991d219d9600e15604264523f2653c63091a6ba250908f051c562a5eafa4c9ca-merged.mount: Deactivated successfully.
Dec  4 05:16:46 np0005545273 podman[96370]: 2025-12-04 10:16:46.32235928 +0000 UTC m=+0.611640978 container remove dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7 (image=quay.io/ceph/ceph:v20, name=admiring_lamport, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:46 np0005545273 podman[97071]: 2025-12-04 10:16:46.327184447 +0000 UTC m=+0.165912069 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:46 np0005545273 systemd[1]: libpod-conmon-dd42617806df109dd6308754b38e7a09e3390dfef1bcdca0f36d1fe7016d30d7.scope: Deactivated successfully.
Dec  4 05:16:46 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec  4 05:16:46 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec  4 05:16:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v107: 194 pgs: 1 unknown, 193 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 2 op/s
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: daemon mds.cephfs.compute-0.zcbnoq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: Cluster is now healthy
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/2328690103' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: daemon mds.cephfs.compute-0.zcbnoq is now active in filesystem cephfs as rank 0
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e5 new map
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2025-12-04T10:16:46:764153+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-04T10:16:31.947313+0000#012modified#0112025-12-04T10:16:46.764151+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14255}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14255 members: 14255#012[mds.cephfs.compute-0.zcbnoq{0:14255} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  4 05:16:46 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq Updating MDS map to version 5 from mon.0
Dec  4 05:16:46 np0005545273 ceph-mds[96299]: mds.0.4 handle_mds_map I am now mds.0.4
Dec  4 05:16:46 np0005545273 ceph-mds[96299]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec  4 05:16:46 np0005545273 ceph-mds[96299]: mds.0.4 recovery_done -- successful recovery!
Dec  4 05:16:46 np0005545273 ceph-mds[96299]: mds.0.4 active_start
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/26287701,v1:192.168.122.100:6815/26287701] up:active
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.zcbnoq=up:active}
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec  4 05:16:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:47 np0005545273 python3[97320]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:47 np0005545273 podman[97348]: 2025-12-04 10:16:47.259865266 +0000 UTC m=+0.040631560 container create 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:16:47 np0005545273 systemd[1]: Started libpod-conmon-39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3.scope.
Dec  4 05:16:47 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c443ff4ba18cc4798da2ea3f0aa9c45d52eea1d75a5cce9d5e7e1485f6bddf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c443ff4ba18cc4798da2ea3f0aa9c45d52eea1d75a5cce9d5e7e1485f6bddf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:47 np0005545273 podman[97348]: 2025-12-04 10:16:47.333818536 +0000 UTC m=+0.114584860 container init 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:16:47 np0005545273 podman[97348]: 2025-12-04 10:16:47.242562275 +0000 UTC m=+0.023328599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:47 np0005545273 podman[97348]: 2025-12-04 10:16:47.342248831 +0000 UTC m=+0.123015125 container start 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:47 np0005545273 podman[97348]: 2025-12-04 10:16:47.347559861 +0000 UTC m=+0.128326285 container attach 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:47 np0005545273 podman[97380]: 2025-12-04 10:16:47.458594353 +0000 UTC m=+0.049536307 container create db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:47 np0005545273 systemd[1]: Started libpod-conmon-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope.
Dec  4 05:16:47 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:47 np0005545273 podman[97380]: 2025-12-04 10:16:47.439380755 +0000 UTC m=+0.030322739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:47 np0005545273 podman[97380]: 2025-12-04 10:16:47.532897701 +0000 UTC m=+0.123839675 container init db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:47 np0005545273 podman[97380]: 2025-12-04 10:16:47.53941869 +0000 UTC m=+0.130360654 container start db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:16:47 np0005545273 admiring_heisenberg[97415]: 167 167
Dec  4 05:16:47 np0005545273 podman[97380]: 2025-12-04 10:16:47.543369306 +0000 UTC m=+0.134311280 container attach db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:16:47 np0005545273 systemd[1]: libpod-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope: Deactivated successfully.
Dec  4 05:16:47 np0005545273 conmon[97415]: conmon db9a7e73782855231454 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope/container/memory.events
Dec  4 05:16:47 np0005545273 podman[97380]: 2025-12-04 10:16:47.545442866 +0000 UTC m=+0.136384830 container died db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:16:47 np0005545273 systemd[1]: var-lib-containers-storage-overlay-904148f26eb140fb51d9e249af113099c5972c942f345fda103dbf8239fff853-merged.mount: Deactivated successfully.
Dec  4 05:16:47 np0005545273 podman[97380]: 2025-12-04 10:16:47.588916134 +0000 UTC m=+0.179858088 container remove db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:47 np0005545273 systemd[1]: libpod-conmon-db9a7e7378285523145483f029278998d24a6330e062f06c998b4de2dabe4e29.scope: Deactivated successfully.
Dec  4 05:16:47 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec  4 05:16:47 np0005545273 funny_leakey[97363]: 
Dec  4 05:16:47 np0005545273 funny_leakey[97363]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Dec  4 05:16:47 np0005545273 systemd[1]: libpod-39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3.scope: Deactivated successfully.
Dec  4 05:16:47 np0005545273 podman[97348]: 2025-12-04 10:16:47.766279001 +0000 UTC m=+0.547045295 container died 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  4 05:16:47 np0005545273 podman[97438]: 2025-12-04 10:16:47.781773908 +0000 UTC m=+0.058525015 container create 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  4 05:16:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:47 np0005545273 systemd[1]: var-lib-containers-storage-overlay-90c443ff4ba18cc4798da2ea3f0aa9c45d52eea1d75a5cce9d5e7e1485f6bddf-merged.mount: Deactivated successfully.
Dec  4 05:16:47 np0005545273 podman[97348]: 2025-12-04 10:16:47.829896109 +0000 UTC m=+0.610662403 container remove 39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3 (image=quay.io/ceph/ceph:v20, name=funny_leakey, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:16:47 np0005545273 systemd[1]: Started libpod-conmon-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope.
Dec  4 05:16:47 np0005545273 systemd[1]: libpod-conmon-39e1be52615e5adeecc58057d865bd5c439904ecd678d1f21b649d3cc88a5ae3.scope: Deactivated successfully.
Dec  4 05:16:47 np0005545273 podman[97438]: 2025-12-04 10:16:47.75512181 +0000 UTC m=+0.031872937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:47 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:47 np0005545273 podman[97438]: 2025-12-04 10:16:47.903261975 +0000 UTC m=+0.180013112 container init 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:16:47 np0005545273 podman[97438]: 2025-12-04 10:16:47.912351106 +0000 UTC m=+0.189102213 container start 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec  4 05:16:47 np0005545273 podman[97438]: 2025-12-04 10:16:47.915533024 +0000 UTC m=+0.192284331 container attach 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:16:47 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec  4 05:16:47 np0005545273 ansible-async_wrapper.py[95783]: Done in kid B.
Dec  4 05:16:47 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec  4 05:16:48 np0005545273 kind_poincare[97473]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:16:48 np0005545273 kind_poincare[97473]: --> All data devices are unavailable
Dec  4 05:16:48 np0005545273 systemd[1]: libpod-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope: Deactivated successfully.
Dec  4 05:16:48 np0005545273 conmon[97473]: conmon 84a7054aa989a38fb209 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope/container/memory.events
Dec  4 05:16:48 np0005545273 podman[97438]: 2025-12-04 10:16:48.470385458 +0000 UTC m=+0.747136585 container died 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:48 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5c9656976207c12447ac36c5ebf3a3dd8cac4de5647875de0a83c358ed23c4ca-merged.mount: Deactivated successfully.
Dec  4 05:16:48 np0005545273 podman[97438]: 2025-12-04 10:16:48.521079152 +0000 UTC m=+0.797830259 container remove 84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:16:48 np0005545273 systemd[1]: libpod-conmon-84a7054aa989a38fb20965a16a71f5c3a4e9483ab7d2ca790e4899100dcd0a77.scope: Deactivated successfully.
Dec  4 05:16:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v110: 195 pgs: 1 creating+peering, 194 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec  4 05:16:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  4 05:16:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  4 05:16:48 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  4 05:16:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  4 05:16:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec  4 05:16:48 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  4 05:16:48 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  4 05:16:48 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 49 pg[10.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:48 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  4 05:16:48 np0005545273 python3[97578]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:49 np0005545273 podman[97592]: 2025-12-04 10:16:49.004408565 +0000 UTC m=+0.047254962 container create c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:16:49 np0005545273 podman[97591]: 2025-12-04 10:16:49.019653036 +0000 UTC m=+0.061775095 container create ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:16:49 np0005545273 systemd[1]: Started libpod-conmon-c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7.scope.
Dec  4 05:16:49 np0005545273 systemd[1]: Started libpod-conmon-ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1.scope.
Dec  4 05:16:49 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:49 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56199c38204e5015d92decb6692e91a5469ad20b95dacec058998e23aea6b784/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56199c38204e5015d92decb6692e91a5469ad20b95dacec058998e23aea6b784/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:49 np0005545273 podman[97592]: 2025-12-04 10:16:48.983057745 +0000 UTC m=+0.025904152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:49 np0005545273 podman[97592]: 2025-12-04 10:16:49.084107585 +0000 UTC m=+0.126953992 container init c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:49 np0005545273 podman[97591]: 2025-12-04 10:16:49.087190229 +0000 UTC m=+0.129312298 container init ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:49 np0005545273 podman[97591]: 2025-12-04 10:16:48.997178589 +0000 UTC m=+0.039300658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:49 np0005545273 podman[97591]: 2025-12-04 10:16:49.093507804 +0000 UTC m=+0.135629853 container start ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:49 np0005545273 podman[97592]: 2025-12-04 10:16:49.093984325 +0000 UTC m=+0.136830702 container start c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:16:49 np0005545273 podman[97591]: 2025-12-04 10:16:49.098012183 +0000 UTC m=+0.140134232 container attach ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:16:49 np0005545273 serene_rhodes[97621]: 167 167
Dec  4 05:16:49 np0005545273 systemd[1]: libpod-c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7.scope: Deactivated successfully.
Dec  4 05:16:49 np0005545273 podman[97592]: 2025-12-04 10:16:49.105537336 +0000 UTC m=+0.148383743 container attach c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:16:49 np0005545273 podman[97592]: 2025-12-04 10:16:49.105956417 +0000 UTC m=+0.148802824 container died c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  4 05:16:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-980f4684966e580ef4244060841fa442939949e625dfd6ac11d2fddd3c026e4d-merged.mount: Deactivated successfully.
Dec  4 05:16:49 np0005545273 podman[97592]: 2025-12-04 10:16:49.147372884 +0000 UTC m=+0.190219271 container remove c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:16:49 np0005545273 systemd[1]: libpod-conmon-c2c23761f320ad2a2ad3f1babfac1d2266563284aa53d36b8f0312220a63fbb7.scope: Deactivated successfully.
Dec  4 05:16:49 np0005545273 podman[97667]: 2025-12-04 10:16:49.309733186 +0000 UTC m=+0.045876257 container create b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:16:49 np0005545273 systemd[1]: Started libpod-conmon-b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651.scope.
Dec  4 05:16:49 np0005545273 podman[97667]: 2025-12-04 10:16:49.288234683 +0000 UTC m=+0.024377764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:49 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:49 np0005545273 podman[97667]: 2025-12-04 10:16:49.410652782 +0000 UTC m=+0.146795863 container init b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:16:49 np0005545273 podman[97667]: 2025-12-04 10:16:49.42867604 +0000 UTC m=+0.164819111 container start b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:16:49 np0005545273 podman[97667]: 2025-12-04 10:16:49.432720019 +0000 UTC m=+0.168863100 container attach b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec  4 05:16:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  4 05:16:49 np0005545273 elegant_cartwright[97622]: 
Dec  4 05:16:49 np0005545273 elegant_cartwright[97622]: [{"container_id": "821fa491a4b1", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.16%", "created": "2025-12-04T10:14:49.149243Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-04T10:14:49.224268Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.994957Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-04T10:14:49.022633Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@crash.compute-0", "version": "20.2.0"}, {"container_id": "8653c026f7d4", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.68%", "created": "2025-12-04T10:16:45.505937Z", "daemon_id": "cephfs.compute-0.zcbnoq", "daemon_name": "mds.cephfs.compute-0.zcbnoq", "daemon_type": "mds", "events": ["2025-12-04T10:16:45.585806Z daemon:mds.cephfs.compute-0.zcbnoq [INFO] \"Deployed mds.cephfs.compute-0.zcbnoq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-12-04T10:16:46.995352Z", "memory_usage": 16053698, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-04T10:16:45.389247Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mds.cephfs.compute-0.zcbnoq", "version": "20.2.0"}, {"container_id": "aa9fc7b1d662", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "13.60%", "created": "2025-12-04T10:14:08.826528Z", "daemon_id": "compute-0.iwufnj", "daemon_name": "mgr.compute-0.iwufnj", "daemon_type": "mgr", "events": ["2025-12-04T10:14:54.221659Z daemon:mgr.compute-0.iwufnj [INFO] \"Reconfigured mgr.compute-0.iwufnj on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.994885Z", "memory_usage": 550292684, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-04T10:14:08.685306Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mgr.compute-0.iwufnj", "version": "20.2.0"}, {"container_id": "5c64ed29fbaf", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.61%", "created": "2025-12-04T10:14:03.447401Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-04T10:14:53.499043Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.994792Z", "memory_request": 2147483648, "memory_usage": 43557847, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-04T10:14:05.870884Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@mon.compute-0", "version": "20.2.0"}, {"container_id": "f4a07ff69694", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.84%", "created": "2025-12-04T10:15:22.376969Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-04T10:15:22.438966Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.995027Z", "memory_request": 4294967296, "memory_usage": 69751275, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-04T10:15:22.242424Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@osd.0", "version": "20.2.0"}, {"container_id": "f6ca53226c0f", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.94%", "created": "2025-12-04T10:15:27.740096Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-04T10:15:28.283050Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.995122Z", "memory_request": 4294967296, "memory_usage": 68325212, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-04T10:15:27.393328Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@osd.1", "version": "20.2.0"}, {"container_id": "743bc5e794db", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.98%", "created": "2025-12-04T10:15:37.213321Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-04T10:15:37.349843Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-04T10:16:46.995196Z", "memory_request": 4294967296, "memory_usage": 67643637, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-04T10:15:36.977634Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d@osd.2", "version": "20.2.0"}, {"container_id": "94b64ba6339c", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Dec  4 05:16:49 np0005545273 systemd[1]: libpod-ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1.scope: Deactivated successfully.
Dec  4 05:16:49 np0005545273 podman[97591]: 2025-12-04 10:16:49.571878106 +0000 UTC m=+0.614000175 container died ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:49 np0005545273 rsyslogd[1007]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "821fa491a4b1", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  4 05:16:49 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 12 completed events
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-56199c38204e5015d92decb6692e91a5469ad20b95dacec058998e23aea6b784-merged.mount: Deactivated successfully.
Dec  4 05:16:49 np0005545273 podman[97591]: 2025-12-04 10:16:49.619626278 +0000 UTC m=+0.661748337 container remove ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1 (image=quay.io/ceph/ceph:v20, name=elegant_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:49 np0005545273 systemd[1]: libpod-conmon-ea89bb6c4d4647e5a84fd37630f8087040c0fed7d6bc37a2271e7880e52bb8c1.scope: Deactivated successfully.
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]: {
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:    "0": [
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:        {
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "devices": [
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "/dev/loop3"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            ],
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_name": "ceph_lv0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_size": "21470642176",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "name": "ceph_lv0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "tags": {
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.crush_device_class": "",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.encrypted": "0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osd_id": "0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.type": "block",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.vdo": "0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.with_tpm": "0"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            },
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "type": "block",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "vg_name": "ceph_vg0"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:        }
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:    ],
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:    "1": [
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:        {
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "devices": [
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "/dev/loop4"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            ],
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_name": "ceph_lv1",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_size": "21470642176",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "name": "ceph_lv1",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "tags": {
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.crush_device_class": "",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.encrypted": "0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osd_id": "1",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.type": "block",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.vdo": "0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.with_tpm": "0"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            },
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "type": "block",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "vg_name": "ceph_vg1"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:        }
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:    ],
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:    "2": [
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:        {
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "devices": [
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "/dev/loop5"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            ],
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_name": "ceph_lv2",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_size": "21470642176",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "name": "ceph_lv2",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "tags": {
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.crush_device_class": "",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.encrypted": "0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osd_id": "2",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.type": "block",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.vdo": "0",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:                "ceph.with_tpm": "0"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            },
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "type": "block",
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:            "vg_name": "ceph_vg2"
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:        }
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]:    ]
Dec  4 05:16:49 np0005545273 wonderful_swartz[97684]: }
Dec  4 05:16:49 np0005545273 systemd[1]: libpod-b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651.scope: Deactivated successfully.
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  4 05:16:49 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 50 pg[10.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:49 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  4 05:16:49 np0005545273 podman[97707]: 2025-12-04 10:16:49.850511488 +0000 UTC m=+0.042278191 container died b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-75221cceaa432ae36283f40b03e9ddb43c7250e2148f8e96de0adc121e39d3d2-merged.mount: Deactivated successfully.
Dec  4 05:16:49 np0005545273 podman[97707]: 2025-12-04 10:16:49.906345996 +0000 UTC m=+0.098112719 container remove b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:16:49 np0005545273 systemd[1]: libpod-conmon-b383c4099311a497bb30381bf5579717f55b85a9d878acf6069d6c9286c1d651.scope: Deactivated successfully.
Dec  4 05:16:50 np0005545273 podman[97787]: 2025-12-04 10:16:50.46078296 +0000 UTC m=+0.058004673 container create a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:50 np0005545273 systemd[1]: Started libpod-conmon-a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f.scope.
Dec  4 05:16:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:50 np0005545273 podman[97787]: 2025-12-04 10:16:50.431907278 +0000 UTC m=+0.029129081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:50 np0005545273 podman[97787]: 2025-12-04 10:16:50.536459472 +0000 UTC m=+0.133681205 container init a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:16:50 np0005545273 podman[97787]: 2025-12-04 10:16:50.544546049 +0000 UTC m=+0.141767762 container start a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec  4 05:16:50 np0005545273 podman[97787]: 2025-12-04 10:16:50.549200713 +0000 UTC m=+0.146422516 container attach a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:16:50 np0005545273 priceless_curie[97803]: 167 167
Dec  4 05:16:50 np0005545273 systemd[1]: libpod-a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f.scope: Deactivated successfully.
Dec  4 05:16:50 np0005545273 podman[97787]: 2025-12-04 10:16:50.552427251 +0000 UTC m=+0.149649024 container died a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:50 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f56cd112e90e88008c3fa92384ab92ed12c510dfe2b4923653865c46c2ce418d-merged.mount: Deactivated successfully.
Dec  4 05:16:50 np0005545273 podman[97787]: 2025-12-04 10:16:50.602756136 +0000 UTC m=+0.199977859 container remove a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_curie, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:50 np0005545273 systemd[1]: libpod-conmon-a8abe5dfdc55f91bc2753259891191e70d135303efa38f181113e2c8064abb1f.scope: Deactivated successfully.
Dec  4 05:16:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v113: 196 pgs: 1 unknown, 1 creating+peering, 194 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 9 op/s
Dec  4 05:16:50 np0005545273 python3[97844]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:50 np0005545273 ceph-mds[96299]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  4 05:16:50 np0005545273 podman[97852]: 2025-12-04 10:16:50.76769535 +0000 UTC m=+0.046046902 container create 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  4 05:16:50 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mds-cephfs-compute-0-zcbnoq[96293]: 2025-12-04T10:16:50.766+0000 7efc31a2c640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  4 05:16:50 np0005545273 podman[97859]: 2025-12-04 10:16:50.78785683 +0000 UTC m=+0.046658326 container create d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:16:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  4 05:16:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  4 05:16:50 np0005545273 systemd[1]: Started libpod-conmon-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope.
Dec  4 05:16:50 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  4 05:16:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  4 05:16:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec  4 05:16:50 np0005545273 systemd[1]: Started libpod-conmon-d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880.scope.
Dec  4 05:16:50 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec  4 05:16:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:50 np0005545273 podman[97852]: 2025-12-04 10:16:50.748009051 +0000 UTC m=+0.026360623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb4b2c68cd5b66f140664013c5635d3296284cca13782d87a3e3702bb497fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bb4b2c68cd5b66f140664013c5635d3296284cca13782d87a3e3702bb497fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:50 np0005545273 podman[97852]: 2025-12-04 10:16:50.855406214 +0000 UTC m=+0.133757806 container init 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:16:50 np0005545273 podman[97859]: 2025-12-04 10:16:50.768075849 +0000 UTC m=+0.026877365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:50 np0005545273 podman[97859]: 2025-12-04 10:16:50.865047919 +0000 UTC m=+0.123849445 container init d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:50 np0005545273 podman[97852]: 2025-12-04 10:16:50.865304035 +0000 UTC m=+0.143655587 container start 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:50 np0005545273 podman[97852]: 2025-12-04 10:16:50.869717622 +0000 UTC m=+0.148069174 container attach 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:50 np0005545273 podman[97859]: 2025-12-04 10:16:50.872407238 +0000 UTC m=+0.131208734 container start d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:50 np0005545273 podman[97859]: 2025-12-04 10:16:50.876476687 +0000 UTC m=+0.135278183 container attach d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:50 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec  4 05:16:50 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec  4 05:16:51 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565692539' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  4 05:16:51 np0005545273 eager_visvesvaraya[97886]: 
Dec  4 05:16:51 np0005545273 eager_visvesvaraya[97886]: {"fsid":"f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":164,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1764843345,"num_in_osds":3,"osd_in_since":1764843314,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":194},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":196,"num_pools":10,"num_objects":29,"data_bytes":463390,"bytes_used":84447232,"bytes_avail":64327479296,"bytes_total":64411926528,"unknown_pgs_ratio":0.0051020407117903233,"inactive_pgs_ratio":0.0051020407117903233,"read_bytes_sec":1279,"write_bytes_sec":2047,"read_op_per_sec":0,"write_op_per_sec":8},"fsmap":{"epoch":5,"btime":"2025-12-04T10:16:46:764153+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.zcbnoq","status":"up:active","gid":14255}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-04T10:16:46.700761+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.zcbnoq":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec  4 05:16:51 np0005545273 systemd[1]: libpod-d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880.scope: Deactivated successfully.
Dec  4 05:16:51 np0005545273 podman[97965]: 2025-12-04 10:16:51.474054971 +0000 UTC m=+0.030265808 container died d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:16:51 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a1bb4b2c68cd5b66f140664013c5635d3296284cca13782d87a3e3702bb497fa-merged.mount: Deactivated successfully.
Dec  4 05:16:51 np0005545273 podman[97965]: 2025-12-04 10:16:51.515796067 +0000 UTC m=+0.072006884 container remove d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880 (image=quay.io/ceph/ceph:v20, name=eager_visvesvaraya, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:16:51 np0005545273 systemd[1]: libpod-conmon-d570701a86c847c2f3565d18688aa1bde587a97860a229c57d7f2bce88cbe880.scope: Deactivated successfully.
Dec  4 05:16:51 np0005545273 lvm[98003]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:16:51 np0005545273 lvm[98002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:16:51 np0005545273 lvm[98003]: VG ceph_vg1 finished
Dec  4 05:16:51 np0005545273 lvm[98002]: VG ceph_vg0 finished
Dec  4 05:16:51 np0005545273 lvm[98005]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:16:51 np0005545273 lvm[98005]: VG ceph_vg2 finished
Dec  4 05:16:51 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  4 05:16:51 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  4 05:16:51 np0005545273 zealous_davinci[97884]: {}
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  4 05:16:51 np0005545273 systemd[1]: libpod-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope: Deactivated successfully.
Dec  4 05:16:51 np0005545273 systemd[1]: libpod-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope: Consumed 1.515s CPU time.
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  4 05:16:51 np0005545273 podman[97852]: 2025-12-04 10:16:51.816080265 +0000 UTC m=+1.094431817 container died 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec  4 05:16:51 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec  4 05:16:51 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6ddfedbd0c2fa07fd7ceb462d43b0c16fcde249aea48779a95657e4466547be2-merged.mount: Deactivated successfully.
Dec  4 05:16:51 np0005545273 podman[97852]: 2025-12-04 10:16:51.874264702 +0000 UTC m=+1.152616254 container remove 88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:51 np0005545273 systemd[1]: libpod-conmon-88619d35a415b26a03262f9c087ba982b80c9286a83c8bbfd37ac963aecbd533.scope: Deactivated successfully.
Dec  4 05:16:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec  4 05:16:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:52 np0005545273 python3[98128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:52 np0005545273 podman[98156]: 2025-12-04 10:16:52.660072937 +0000 UTC m=+0.051396283 container create 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:16:52 np0005545273 systemd[1]: Started libpod-conmon-75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c.scope.
Dec  4 05:16:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v116: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec  4 05:16:52 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4bb803ce8ecf05b62396cd429e69055c2c0074c6609e2d2cc2870716442f04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:52 np0005545273 podman[98156]: 2025-12-04 10:16:52.642021107 +0000 UTC m=+0.033344443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b4bb803ce8ecf05b62396cd429e69055c2c0074c6609e2d2cc2870716442f04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:52 np0005545273 podman[98179]: 2025-12-04 10:16:52.745969458 +0000 UTC m=+0.081038984 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:16:52 np0005545273 podman[98156]: 2025-12-04 10:16:52.749872442 +0000 UTC m=+0.141195798 container init 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:52 np0005545273 podman[98156]: 2025-12-04 10:16:52.756548395 +0000 UTC m=+0.147871741 container start 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec  4 05:16:52 np0005545273 podman[98156]: 2025-12-04 10:16:52.759477236 +0000 UTC m=+0.150800602 container attach 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  4 05:16:52 np0005545273 podman[98179]: 2025-12-04 10:16:52.840806735 +0000 UTC m=+0.175876281 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:52 np0005545273 ceph-mon[75358]: from='client.? 192.168.122.100:0/3790395680' entity='client.rgw.rgw.compute-0.jnsliu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  4 05:16:53 np0005545273 radosgw[95892]: v1 topic migration: starting v1 topic migration..
Dec  4 05:16:53 np0005545273 radosgw[95892]: v1 topic migration: finished v1 topic migration
Dec  4 05:16:53 np0005545273 radosgw[95892]: framework: beast
Dec  4 05:16:53 np0005545273 radosgw[95892]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  4 05:16:53 np0005545273 radosgw[95892]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  4 05:16:53 np0005545273 radosgw[95892]: starting handler: beast
Dec  4 05:16:53 np0005545273 radosgw[95892]: set uid:gid to 167:167 (ceph:ceph)
Dec  4 05:16:53 np0005545273 radosgw[95892]: mgrc service_daemon_register rgw.14258 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.jnsliu,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=ec3e4f8e-ad34-4c75-8bd1-299db07ac24d,zone_name=default,zonegroup_id=6e153e17-7b8b-4b77-9534-ddba9e20c703,zonegroup_name=default}
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/101598567' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  4 05:16:53 np0005545273 suspicious_rosalind[98195]: 
Dec  4 05:16:53 np0005545273 suspicious_rosalind[98195]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.jnsliu","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  4 05:16:53 np0005545273 systemd[1]: libpod-75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c.scope: Deactivated successfully.
Dec  4 05:16:53 np0005545273 podman[98156]: 2025-12-04 10:16:53.190132388 +0000 UTC m=+0.581455724 container died 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:16:53 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3b4bb803ce8ecf05b62396cd429e69055c2c0074c6609e2d2cc2870716442f04-merged.mount: Deactivated successfully.
Dec  4 05:16:53 np0005545273 podman[98156]: 2025-12-04 10:16:53.231595357 +0000 UTC m=+0.622918693 container remove 75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c (image=quay.io/ceph/ceph:v20, name=suspicious_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:53 np0005545273 systemd[1]: libpod-conmon-75107944c93c90f25309c96c2746b1c0c74ab0da850a567e717f73a291da299c.scope: Deactivated successfully.
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:53 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  4 05:16:53 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  4 05:16:54 np0005545273 python3[98529]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:54 np0005545273 podman[98532]: 2025-12-04 10:16:54.472518469 +0000 UTC m=+0.067180196 container create 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:16:54 np0005545273 systemd[1]: Started libpod-conmon-61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985.scope.
Dec  4 05:16:54 np0005545273 podman[98532]: 2025-12-04 10:16:54.443692147 +0000 UTC m=+0.038353924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:54 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01891595a3a1098bf5d7ca145449f99ee6e46cc35a07a708f89503a9ad275032/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01891595a3a1098bf5d7ca145449f99ee6e46cc35a07a708f89503a9ad275032/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:16:54 np0005545273 podman[98532]: 2025-12-04 10:16:54.568985817 +0000 UTC m=+0.163647554 container init 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:54 np0005545273 podman[98532]: 2025-12-04 10:16:54.580929567 +0000 UTC m=+0.175591294 container start 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:16:54 np0005545273 podman[98532]: 2025-12-04 10:16:54.585188231 +0000 UTC m=+0.179850048 container attach 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v118: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 209 B/s rd, 418 B/s wr, 1 op/s
Dec  4 05:16:54 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  4 05:16:54 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:16:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec  4 05:16:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773187369' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec  4 05:16:55 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec  4 05:16:55 np0005545273 lucid_johnson[98562]: mimic
Dec  4 05:16:55 np0005545273 podman[98648]: 2025-12-04 10:16:55.017310488 +0000 UTC m=+0.053465982 container create b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:16:55 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec  4 05:16:55 np0005545273 systemd[1]: libpod-61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985.scope: Deactivated successfully.
Dec  4 05:16:55 np0005545273 podman[98532]: 2025-12-04 10:16:55.030078609 +0000 UTC m=+0.624740336 container died 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:55 np0005545273 systemd[1]: Started libpod-conmon-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope.
Dec  4 05:16:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-01891595a3a1098bf5d7ca145449f99ee6e46cc35a07a708f89503a9ad275032-merged.mount: Deactivated successfully.
Dec  4 05:16:55 np0005545273 podman[98532]: 2025-12-04 10:16:55.073891725 +0000 UTC m=+0.668553452 container remove 61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985 (image=quay.io/ceph/ceph:v20, name=lucid_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:16:55 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:55 np0005545273 podman[98648]: 2025-12-04 10:16:54.983150456 +0000 UTC m=+0.019306030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:55 np0005545273 systemd[1]: libpod-conmon-61bb4deb84daf3b2c4d895866d4f087421e599ba60ed4c4d3a9558cbc737e985.scope: Deactivated successfully.
Dec  4 05:16:55 np0005545273 podman[98648]: 2025-12-04 10:16:55.089927856 +0000 UTC m=+0.126083360 container init b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:16:55 np0005545273 podman[98648]: 2025-12-04 10:16:55.094851295 +0000 UTC m=+0.131006779 container start b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:55 np0005545273 podman[98648]: 2025-12-04 10:16:55.098231358 +0000 UTC m=+0.134386852 container attach b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:55 np0005545273 elated_poitras[98674]: 167 167
Dec  4 05:16:55 np0005545273 systemd[1]: libpod-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope: Deactivated successfully.
Dec  4 05:16:55 np0005545273 conmon[98674]: conmon b2b10dc1e43c28165a4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope/container/memory.events
Dec  4 05:16:55 np0005545273 podman[98684]: 2025-12-04 10:16:55.148134242 +0000 UTC m=+0.031664731 container died b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-38ad49f701e8d88eb95cef9b9753019a749e7ae149baf86cb99b466e0d538b78-merged.mount: Deactivated successfully.
Dec  4 05:16:55 np0005545273 podman[98684]: 2025-12-04 10:16:55.181657748 +0000 UTC m=+0.065188217 container remove b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:55 np0005545273 systemd[1]: libpod-conmon-b2b10dc1e43c28165a4d8bad71b5983a87eb08443c6e469ebc86d60c3dd006f0.scope: Deactivated successfully.
Dec  4 05:16:55 np0005545273 podman[98705]: 2025-12-04 10:16:55.371525039 +0000 UTC m=+0.048240376 container create 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:55 np0005545273 systemd[1]: Started libpod-conmon-28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0.scope.
Dec  4 05:16:55 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:55 np0005545273 podman[98705]: 2025-12-04 10:16:55.351131062 +0000 UTC m=+0.027846439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:55 np0005545273 podman[98705]: 2025-12-04 10:16:55.468400477 +0000 UTC m=+0.145115854 container init 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:16:55 np0005545273 podman[98705]: 2025-12-04 10:16:55.477639072 +0000 UTC m=+0.154354419 container start 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:55 np0005545273 podman[98705]: 2025-12-04 10:16:55.480906471 +0000 UTC m=+0.157621848 container attach 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:55 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  4 05:16:55 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  4 05:16:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  4 05:16:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  4 05:16:56 np0005545273 mystifying_yonath[98722]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:16:56 np0005545273 mystifying_yonath[98722]: --> All data devices are unavailable
Dec  4 05:16:56 np0005545273 systemd[1]: libpod-28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0.scope: Deactivated successfully.
Dec  4 05:16:56 np0005545273 podman[98705]: 2025-12-04 10:16:56.1164895 +0000 UTC m=+0.793204887 container died 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8be01b803be1e749062825a10db74929cb3928f18219f52bbdc72135318d9679-merged.mount: Deactivated successfully.
Dec  4 05:16:56 np0005545273 podman[98705]: 2025-12-04 10:16:56.183384899 +0000 UTC m=+0.860100256 container remove 28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:56 np0005545273 python3[98766]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:16:56 np0005545273 systemd[1]: libpod-conmon-28248f02b435ca2acc5fc09f5660cf1210e64ec617e6f769b6215302c9c30ce0.scope: Deactivated successfully.
Dec  4 05:16:56 np0005545273 podman[98780]: 2025-12-04 10:16:56.257972634 +0000 UTC m=+0.049355362 container create 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:56 np0005545273 systemd[1]: Started libpod-conmon-761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9.scope.
Dec  4 05:16:56 np0005545273 podman[98780]: 2025-12-04 10:16:56.237093646 +0000 UTC m=+0.028476424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:16:56 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c821843b42aa44b2bc748a8f3e7e145137d752fffe8b1ad9b2fefec1e856b760/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c821843b42aa44b2bc748a8f3e7e145137d752fffe8b1ad9b2fefec1e856b760/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:56 np0005545273 podman[98780]: 2025-12-04 10:16:56.360943179 +0000 UTC m=+0.152325927 container init 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:56 np0005545273 podman[98780]: 2025-12-04 10:16:56.367151191 +0000 UTC m=+0.158533909 container start 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:16:56 np0005545273 podman[98780]: 2025-12-04 10:16:56.370707087 +0000 UTC m=+0.162089825 container attach 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec  4 05:16:56 np0005545273 podman[98881]: 2025-12-04 10:16:56.65430212 +0000 UTC m=+0.043697345 container create 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:16:56 np0005545273 systemd[1]: Started libpod-conmon-042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64.scope.
Dec  4 05:16:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v119: 197 pgs: 197 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Dec  4 05:16:56 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:56 np0005545273 podman[98881]: 2025-12-04 10:16:56.631304039 +0000 UTC m=+0.020699274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:56 np0005545273 podman[98881]: 2025-12-04 10:16:56.742917596 +0000 UTC m=+0.132312831 container init 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:16:56 np0005545273 podman[98881]: 2025-12-04 10:16:56.749461516 +0000 UTC m=+0.138856731 container start 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:16:56 np0005545273 podman[98881]: 2025-12-04 10:16:56.75331692 +0000 UTC m=+0.142712145 container attach 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:16:56 np0005545273 vigorous_bose[98897]: 167 167
Dec  4 05:16:56 np0005545273 systemd[1]: libpod-042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64.scope: Deactivated successfully.
Dec  4 05:16:56 np0005545273 podman[98881]: 2025-12-04 10:16:56.75497832 +0000 UTC m=+0.144373525 container died 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:16:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6b4e38d366796cda994eeb5c57c5308fcc7bd71cbb3ce15aa97ff5182f29a0c0-merged.mount: Deactivated successfully.
Dec  4 05:16:56 np0005545273 podman[98881]: 2025-12-04 10:16:56.794169494 +0000 UTC m=+0.183564709 container remove 042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bose, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:16:56 np0005545273 systemd[1]: libpod-conmon-042fee1591855d378c3cb35f90b656efdaf32bc6d007c54cea88d117380cac64.scope: Deactivated successfully.
Dec  4 05:16:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec  4 05:16:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59517235' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec  4 05:16:56 np0005545273 condescending_wilson[98819]: 
Dec  4 05:16:56 np0005545273 condescending_wilson[98819]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Dec  4 05:16:56 np0005545273 systemd[1]: libpod-761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9.scope: Deactivated successfully.
Dec  4 05:16:56 np0005545273 podman[98780]: 2025-12-04 10:16:56.883697443 +0000 UTC m=+0.675080171 container died 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:16:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c821843b42aa44b2bc748a8f3e7e145137d752fffe8b1ad9b2fefec1e856b760-merged.mount: Deactivated successfully.
Dec  4 05:16:56 np0005545273 podman[98780]: 2025-12-04 10:16:56.934659053 +0000 UTC m=+0.726041781 container remove 761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9 (image=quay.io/ceph/ceph:v20, name=condescending_wilson, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:56 np0005545273 systemd[1]: libpod-conmon-761310633a859bf384d11ac2c519ca4fdd9de1309e4b29bde90c73e91790d1d9.scope: Deactivated successfully.
Dec  4 05:16:57 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  4 05:16:57 np0005545273 podman[98935]: 2025-12-04 10:16:57.022582963 +0000 UTC m=+0.060960195 container create 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:16:57 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  4 05:16:57 np0005545273 systemd[1]: Started libpod-conmon-065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7.scope.
Dec  4 05:16:57 np0005545273 podman[98935]: 2025-12-04 10:16:56.995790951 +0000 UTC m=+0.034168183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:57 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:57 np0005545273 podman[98935]: 2025-12-04 10:16:57.121317985 +0000 UTC m=+0.159695227 container init 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:57 np0005545273 podman[98935]: 2025-12-04 10:16:57.130381976 +0000 UTC m=+0.168759188 container start 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:57 np0005545273 podman[98935]: 2025-12-04 10:16:57.133897881 +0000 UTC m=+0.172275093 container attach 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]: {
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:    "0": [
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:        {
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "devices": [
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "/dev/loop3"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            ],
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_name": "ceph_lv0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_size": "21470642176",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "name": "ceph_lv0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "tags": {
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.crush_device_class": "",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.encrypted": "0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osd_id": "0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.type": "block",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.vdo": "0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.with_tpm": "0"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            },
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "type": "block",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "vg_name": "ceph_vg0"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:        }
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:    ],
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:    "1": [
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:        {
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "devices": [
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "/dev/loop4"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            ],
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_name": "ceph_lv1",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_size": "21470642176",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "name": "ceph_lv1",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "tags": {
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.crush_device_class": "",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.encrypted": "0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osd_id": "1",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.type": "block",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.vdo": "0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.with_tpm": "0"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            },
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "type": "block",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "vg_name": "ceph_vg1"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:        }
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:    ],
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:    "2": [
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:        {
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "devices": [
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "/dev/loop5"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            ],
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_name": "ceph_lv2",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_size": "21470642176",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "name": "ceph_lv2",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "tags": {
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.cluster_name": "ceph",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.crush_device_class": "",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.encrypted": "0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.objectstore": "bluestore",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osd_id": "2",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.type": "block",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.vdo": "0",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:                "ceph.with_tpm": "0"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            },
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "type": "block",
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:            "vg_name": "ceph_vg2"
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:        }
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]:    ]
Dec  4 05:16:57 np0005545273 flamboyant_swartz[98951]: }
Dec  4 05:16:57 np0005545273 systemd[1]: libpod-065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7.scope: Deactivated successfully.
Dec  4 05:16:57 np0005545273 podman[98935]: 2025-12-04 10:16:57.502884482 +0000 UTC m=+0.541261714 container died 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:16:57 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8ec9424e013e1e382c09a8dbddef812ee91ed8592d1bc8f10b642b71db93f36f-merged.mount: Deactivated successfully.
Dec  4 05:16:57 np0005545273 podman[98935]: 2025-12-04 10:16:57.567516715 +0000 UTC m=+0.605893967 container remove 065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:57 np0005545273 systemd[1]: libpod-conmon-065e313976af7b5b72abdc60be9da0386091df036b89585cb0d9655ec49954d7.scope: Deactivated successfully.
Dec  4 05:16:57 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec  4 05:16:57 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec  4 05:16:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:16:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:16:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:16:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:16:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:16:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:16:57 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  4 05:16:58 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  4 05:16:58 np0005545273 podman[99033]: 2025-12-04 10:16:58.221459481 +0000 UTC m=+0.059400496 container create 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  4 05:16:58 np0005545273 systemd[1]: Started libpod-conmon-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope.
Dec  4 05:16:58 np0005545273 podman[99033]: 2025-12-04 10:16:58.18937813 +0000 UTC m=+0.027319225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:58 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:58 np0005545273 podman[99033]: 2025-12-04 10:16:58.348880792 +0000 UTC m=+0.186821837 container init 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:16:58 np0005545273 podman[99033]: 2025-12-04 10:16:58.358013665 +0000 UTC m=+0.195954690 container start 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:16:58 np0005545273 podman[99033]: 2025-12-04 10:16:58.362929464 +0000 UTC m=+0.200870479 container attach 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:16:58 np0005545273 dazzling_goodall[99049]: 167 167
Dec  4 05:16:58 np0005545273 systemd[1]: libpod-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope: Deactivated successfully.
Dec  4 05:16:58 np0005545273 conmon[99049]: conmon 13a6057773824f0ca257 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope/container/memory.events
Dec  4 05:16:58 np0005545273 podman[99033]: 2025-12-04 10:16:58.368704084 +0000 UTC m=+0.206645089 container died 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:16:58 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a7e1616885745e02e87bf322424567776841f1eab2b09468a2985a670cef01e7-merged.mount: Deactivated successfully.
Dec  4 05:16:58 np0005545273 podman[99033]: 2025-12-04 10:16:58.40836117 +0000 UTC m=+0.246302195 container remove 13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:58 np0005545273 systemd[1]: libpod-conmon-13a6057773824f0ca25756897c6d2d389a92a820612ebb9e7e912a9acf461648.scope: Deactivated successfully.
Dec  4 05:16:58 np0005545273 podman[99074]: 2025-12-04 10:16:58.605542278 +0000 UTC m=+0.059226782 container create 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:16:58 np0005545273 systemd[1]: Started libpod-conmon-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope.
Dec  4 05:16:58 np0005545273 podman[99074]: 2025-12-04 10:16:58.575531278 +0000 UTC m=+0.029215782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:16:58 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:16:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:16:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v120: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 8.4 KiB/s wr, 180 op/s
Dec  4 05:16:58 np0005545273 podman[99074]: 2025-12-04 10:16:58.711890337 +0000 UTC m=+0.165574881 container init 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:16:58 np0005545273 podman[99074]: 2025-12-04 10:16:58.722150707 +0000 UTC m=+0.175835191 container start 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:16:58 np0005545273 podman[99074]: 2025-12-04 10:16:58.727008715 +0000 UTC m=+0.180693219 container attach 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:16:59 np0005545273 lvm[99169]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:16:59 np0005545273 lvm[99169]: VG ceph_vg0 finished
Dec  4 05:16:59 np0005545273 lvm[99170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:16:59 np0005545273 lvm[99170]: VG ceph_vg1 finished
Dec  4 05:16:59 np0005545273 lvm[99172]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:16:59 np0005545273 lvm[99172]: VG ceph_vg2 finished
Dec  4 05:16:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:16:59 np0005545273 optimistic_wescoff[99091]: {}
Dec  4 05:16:59 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec  4 05:16:59 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec  4 05:16:59 np0005545273 systemd[1]: libpod-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope: Deactivated successfully.
Dec  4 05:16:59 np0005545273 systemd[1]: libpod-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope: Consumed 1.518s CPU time.
Dec  4 05:16:59 np0005545273 podman[99175]: 2025-12-04 10:16:59.729123735 +0000 UTC m=+0.032734368 container died 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:16:59 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fe5d72ecfe895ee3ed5476676c7ea79665d4a10e54f5438c5ddf2455a94b9067-merged.mount: Deactivated successfully.
Dec  4 05:16:59 np0005545273 podman[99175]: 2025-12-04 10:16:59.780349512 +0000 UTC m=+0.083960135 container remove 8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:16:59 np0005545273 systemd[1]: libpod-conmon-8194b61f3c0fd5e7f49a2260b7c7cebbe0c1ebac1ad99e8c08312cbaa62aad85.scope: Deactivated successfully.
Dec  4 05:16:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:16:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:16:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec  4 05:16:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:16:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec  4 05:17:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v121: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 7.4 KiB/s wr, 160 op/s
Dec  4 05:17:00 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec  4 05:17:00 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec  4 05:17:01 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec  4 05:17:01 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec  4 05:17:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.4 KiB/s wr, 140 op/s
Dec  4 05:17:02 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec  4 05:17:02 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec  4 05:17:02 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec  4 05:17:02 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec  4 05:17:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  4 05:17:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  4 05:17:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 5.4 KiB/s wr, 118 op/s
Dec  4 05:17:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec  4 05:17:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec  4 05:17:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  4 05:17:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  4 05:17:05 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec  4 05:17:05 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec  4 05:17:06 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  4 05:17:06 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  4 05:17:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Dec  4 05:17:06 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  4 05:17:06 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  4 05:17:07 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec  4 05:17:07 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec  4 05:17:07 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec  4 05:17:07 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec  4 05:17:08 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec  4 05:17:08 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec  4 05:17:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Dec  4 05:17:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:09 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  4 05:17:09 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  4 05:17:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  4 05:17:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  4 05:17:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:10 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  4 05:17:10 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  4 05:17:11 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec  4 05:17:11 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec  4 05:17:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:12 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec  4 05:17:12 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec  4 05:17:13 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  4 05:17:13 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  4 05:17:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:14 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec  4 05:17:14 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec  4 05:17:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:15 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  4 05:17:15 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  4 05:17:15 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec  4 05:17:15 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec  4 05:17:16 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  4 05:17:16 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  4 05:17:16 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  4 05:17:16 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  4 05:17:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:17 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec  4 05:17:17 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec  4 05:17:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec  4 05:17:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec  4 05:17:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  4 05:17:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  4 05:17:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:24 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  4 05:17:24 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  4 05:17:25 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  4 05:17:25 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  4 05:17:25 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec  4 05:17:25 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec  4 05:17:26 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  4 05:17:26 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  4 05:17:26 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec  4 05:17:26 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec  4 05:17:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:17:26
Dec  4 05:17:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:17:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:17:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Dec  4 05:17:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:17:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:17:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:17:28 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec  4 05:17:28 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec  4 05:17:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:28 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec  4 05:17:28 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec  4 05:17:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  4 05:17:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  4 05:17:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec  4 05:17:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec  4 05:17:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec  4 05:17:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec  4 05:17:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:32 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec  4 05:17:32 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.041313365636459e-06 of space, bias 4.0, pg target 0.001249576038763751 quantized to 16 (current 32)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:17:32 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec  4 05:17:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:17:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  4 05:17:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  4 05:17:33 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  4 05:17:33 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev 1d71047a-4d95-4992-ae10-32ab2e31248c (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  4 05:17:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:17:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v139: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  4 05:17:34 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev b15e31a2-b6d9-4e61-b1ee-d435defc20a6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:17:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:35 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec  4 05:17:35 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.0( v 46'6 (0'0,46'6] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=13.839031219s) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 46'5 active pruub 140.592666626s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:35 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev 093b82ae-26b3-44c2-a36c-c3612113336c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.0( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=13.839031219s) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 0'0 unknown pruub 140.592666626s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.14( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.17( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.16( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.7( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.13( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.9( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.19( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.12( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.10( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1( v 46'6 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.3( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.8( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.5( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.18( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v142: 228 pgs: 31 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:36 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec  4 05:17:36 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev b9da7383-50a7-406d-bd4b-413a4454a4a6 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  4 05:17:36 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 57 pg[10.0( v 53'18 (0'0,53'18] local-lis/les=49/50 n=9 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=8.880224228s) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 53'17 mlcod 53'17 active pruub 128.044174194s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev 1d71047a-4d95-4992-ae10-32ab2e31248c (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event 1d71047a-4d95-4992-ae10-32ab2e31248c (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev b15e31a2-b6d9-4e61-b1ee-d435defc20a6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event b15e31a2-b6d9-4e61-b1ee-d435defc20a6 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev 093b82ae-26b3-44c2-a36c-c3612113336c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event 093b82ae-26b3-44c2-a36c-c3612113336c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev b9da7383-50a7-406d-bd4b-413a4454a4a6 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  4 05:17:36 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event b9da7383-50a7-406d-bd4b-413a4454a4a6 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec  4 05:17:36 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 57 pg[10.0( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=8.880224228s) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 53'17 mlcod 0'0 unknown pruub 128.044174194s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.0( v 53'483 (0'0,53'483] local-lis/les=47/48 n=210 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=14.854414940s) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 53'482 active pruub 142.611846924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.16( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.17( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.3( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.8( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.0( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.7( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.5( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.19( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.13( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.0( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=14.854414940s) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 0'0 unknown pruub 142.611846924s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ecc00 space 0x559006b8f140 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007235180 space 0x559008917440 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007189d80 space 0x559008c40240 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec580 space 0x559008082840 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9480 space 0x559006a5e240 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007729880 space 0x559008083740 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x55900723ac00 space 0x559008a09d40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec900 space 0x55900731f740 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ed880 space 0x5590084eb140 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ede80 space 0x55900731ee40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007302b80 space 0x5590084ed440 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072d0f00 space 0x559008a19a40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071caf80 space 0x55900891d740 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ecf80 space 0x55900685c840 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072d0b00 space 0x559008c41740 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ed180 space 0x559007376540 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072cbe80 space 0x559006a5eb40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007214780 space 0x559008917740 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec500 space 0x559008478b40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ecd80 space 0x559007353140 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007728300 space 0x559008916540 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072d0400 space 0x559007355140 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234e00 space 0x559008a09440 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ec700 space 0x559008478240 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218b80 space 0x5590088c4e40 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234a00 space 0x559008a20240 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218000 space 0x559008916b40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218f00 space 0x559008c3e240 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2b980 space 0x5590084edd40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2b880 space 0x55900892b140 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234d00 space 0x5590088fcb40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f8c00 space 0x559007355a40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007214500 space 0x5590088cd440 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x55900723a080 space 0x559008919d40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f8580 space 0x5590084ea840 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218c80 space 0x559007377740 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071eda80 space 0x5590084eba40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007184780 space 0x559008c3f440 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007177380 space 0x559006b8eb40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9b00 space 0x559008082e40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007728500 space 0x5590084ecb40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007234080 space 0x559008a20b40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007155880 space 0x559008082240 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007728880 space 0x559007649740 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072cae00 space 0x5590088ee840 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007184900 space 0x5590088ef440 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007215380 space 0x55900891c540 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9180 space 0x559007353d40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007155900 space 0x559008478840 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2a080 space 0x559008479d40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f9300 space 0x5590088cda40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559006c2a800 space 0x559008479440 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007184980 space 0x5590088e1140 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071f8a00 space 0x5590084ec240 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218780 space 0x559006a5fa40 0x0~9a clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590071ed380 space 0x559007376e40 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x5590072fac00 space 0x559007354840 0x0~6e clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007218e80 space 0x559008919140 0x0~98 clean)
Dec  4 05:17:36 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x559006b44240) split_cache   moving buffer(0x559007214580 space 0x559008916e40 0x0~6e clean)
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  4 05:17:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  4 05:17:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  4 05:17:37 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1b( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.b( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.d( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.a( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.13( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.12( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.11( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1e( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.10( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1f( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1d( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1c( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1a( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.19( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.18( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.7( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.6( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.4( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.8( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.f( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.5( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.9( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.e( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.c( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.2( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.3( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.14( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.15( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.15( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.16( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.17( v 53'18 lc 0'0 (0'0,53'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.17( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.14( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.16( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.2( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.c( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.d( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.9( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.12( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1d( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1c( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.18( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.e( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.a( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.6( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.9( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.4( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.8( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.0( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 53'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.5( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1a( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.18( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1e( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1c( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.12( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.13( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.10( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.14( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.0( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.c( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.15( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.3( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 58 pg[10.14( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.a( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.4( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1a( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.2( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.12( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.10( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:37 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:38 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  4 05:17:38 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  4 05:17:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v145: 290 pgs: 1 peering, 62 unknown, 227 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  4 05:17:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  4 05:17:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  4 05:17:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  4 05:17:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  4 05:17:38 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  4 05:17:38 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  4 05:17:39 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.693107605s) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active pruub 138.647598267s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:39 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.693107605s) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown pruub 138.647598267s@ mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:39 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 16 completed events
Dec  4 05:17:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:17:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:17:39 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec  4 05:17:39 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec  4 05:17:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  4 05:17:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  4 05:17:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:17:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  4 05:17:40 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.0( empty local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  4 05:17:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  4 05:17:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 1 peering, 93 unknown, 227 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:41 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  4 05:17:41 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  4 05:17:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  4 05:17:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  4 05:17:42 np0005545273 python3[99244]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:17:42 np0005545273 podman[99245]: 2025-12-04 10:17:42.597985538 +0000 UTC m=+0.047414883 container create a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:17:42 np0005545273 systemd[1]: Started libpod-conmon-a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7.scope.
Dec  4 05:17:42 np0005545273 podman[99245]: 2025-12-04 10:17:42.579218732 +0000 UTC m=+0.028648107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:17:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:17:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a56afa357b603f8a629d2ba744504ca1a73d6456fdefb6f68557f73a37c766e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:17:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a56afa357b603f8a629d2ba744504ca1a73d6456fdefb6f68557f73a37c766e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:17:42 np0005545273 podman[99245]: 2025-12-04 10:17:42.702161396 +0000 UTC m=+0.151590741 container init a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:17:42 np0005545273 podman[99245]: 2025-12-04 10:17:42.709824972 +0000 UTC m=+0.159254317 container start a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:17:42 np0005545273 podman[99245]: 2025-12-04 10:17:42.713549083 +0000 UTC m=+0.162978438 container attach a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:17:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:17:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:17:42 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  4 05:17:42 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  4 05:17:42 np0005545273 eager_leavitt[99260]: could not fetch user info: no user info saved
Dec  4 05:17:42 np0005545273 systemd[1]: libpod-a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7.scope: Deactivated successfully.
Dec  4 05:17:42 np0005545273 podman[99245]: 2025-12-04 10:17:42.91819422 +0000 UTC m=+0.367623585 container died a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:17:42 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9a56afa357b603f8a629d2ba744504ca1a73d6456fdefb6f68557f73a37c766e-merged.mount: Deactivated successfully.
Dec  4 05:17:42 np0005545273 podman[99245]: 2025-12-04 10:17:42.958229303 +0000 UTC m=+0.407658638 container remove a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7 (image=quay.io/ceph/ceph:v20, name=eager_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:17:42 np0005545273 systemd[1]: libpod-conmon-a928d87207c8dad0c3ec054f98f1dac034ac558107d7e69e0c6cb7844ce8d9a7.scope: Deactivated successfully.
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  4 05:17:43 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842870712s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.759933472s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842812538s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.759933472s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851659775s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.768844604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851603508s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.768844604s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842667580s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.759948730s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842653275s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.759948730s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851858139s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178298950s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972802162s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.890274048s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972784996s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.890274048s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851296425s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.768844604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.974140167s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891738892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851257324s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.768844604s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.974126816s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891738892s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851811409s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178298950s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854658127s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772491455s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848490715s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766357422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854634285s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772491455s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848474503s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766357422s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973777771s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891784668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973707199s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891738892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973757744s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891784668s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973693848s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891738892s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848252296s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766403198s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.d( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851372719s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.178115845s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.d( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851303101s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.178115845s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.12( v 60'19 (0'0,60'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851553917s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 53'18 active pruub 136.178405762s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854074478s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772399902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854035378s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772399902s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848249435s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766647339s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973383904s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891845703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848228455s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766647339s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.848234177s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766403198s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973370552s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891845703s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973180771s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891738892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973235130s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891799927s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973223686s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891799927s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973237038s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891830444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847897530s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766525269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973223686s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891830444s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973142624s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891738892s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847876549s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766525269s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847798347s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766540527s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853405952s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.772171021s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847778320s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766540527s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853387833s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.772171021s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972988129s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891815186s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972967148s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891815186s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853140831s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772018433s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851790428s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178665161s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.12( v 60'19 (0'0,60'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851522446s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 53'18 unknown NOTIFY pruub 136.178405762s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851760864s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178665161s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851792336s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178634644s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851577759s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178634644s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851357460s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178497314s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851336479s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178497314s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851150513s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178344727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851134300s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178344727s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851096153s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178604126s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851073265s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178604126s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851023674s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178588867s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851001740s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178588867s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851005554s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178741455s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850990295s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178741455s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853290558s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772216797s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972928047s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891876221s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972913742s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891876221s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853256226s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772216797s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853258133s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772262573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853247643s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772262573s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853122711s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772018433s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847470284s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766601562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.847447395s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766601562s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972467422s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.892028809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850154877s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178802490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850131989s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178802490s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850015640s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178833008s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849994659s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178833008s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849793434s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178771973s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849775314s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178771973s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.9( v 60'22 (0'0,60'22] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850012779s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.179077148s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849605560s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.178710938s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.9( v 60'22 (0'0,60'22] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849981308s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.179077148s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972025871s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.892028809s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845895767s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766632080s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845875740s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766632080s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849592209s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.178710938s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.e( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849787712s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.179092407s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849862099s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.179183960s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849843979s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.179183960s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.e( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849713326s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.179092407s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849770546s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.179244995s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849744797s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.179244995s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.14( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851269722s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.180892944s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.14( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851244926s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.180892944s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.15( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850983620s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 active pruub 136.180740356s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.15( v 60'22 (0'0,60'22] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850947380s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'22 lcod 60'21 unknown NOTIFY pruub 136.180740356s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.851053238s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.180862427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850997925s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.180862427s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850710869s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 active pruub 136.180831909s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850689888s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 unknown NOTIFY pruub 136.180831909s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.15( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970975876s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891983032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970958710s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891983032s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845645905s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766677856s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845627785s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766677856s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850890160s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772354126s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970510483s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.891998291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.2( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850800514s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772354126s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845039368s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766784668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.845014572s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766784668s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.2( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850543022s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772384644s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970182419s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.892028809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970166206s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.892028809s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850509644s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772384644s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.969996452s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.891998291s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850395203s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772445679s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844781876s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766845703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973319054s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895385742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844763756s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766845703s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973299980s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895385742s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850362778s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772445679s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844604492s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766876221s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973101616s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895401001s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973128319s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895431519s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844582558s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766876221s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850190163s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 active pruub 144.772506714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973111153s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895431519s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973078728s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895401001s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.850171089s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 144.772506714s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844511986s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.766952515s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844500542s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.766952515s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972947121s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895416260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972927094s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895416260s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844467163s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767059326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849922180s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772506714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849894524s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772506714s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972844124s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895492554s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844441414s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767059326s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972822189s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895492554s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973083496s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895812988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.973070145s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895812988s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849761009s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 144.772552490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.9( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844330788s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767135620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.849743843s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 144.772552490s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844311714s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767135620s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844038963s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767089844s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972417831s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895492554s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.844010353s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767089844s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972396851s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895492554s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972160339s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895507812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972143173s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895507812s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972132683s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895599365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.972115517s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895599365s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854270935s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.777877808s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.843025208s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767227173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.854243279s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.777877808s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842938423s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767227173s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853326797s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 144.777832031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=10.853282928s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 144.777832031s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970718384s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 146.895523071s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842124939s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767242432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.842099190s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767242432s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.11( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.841900826s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 143.767257690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=9.841798782s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 143.767257690s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.10( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.18( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=12.970474243s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 146.895523071s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.4( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.9( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.1f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.11( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[8.11( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[10.1( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.6( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.6( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.1d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.10( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 61 pg[8.1a( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  4 05:17:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  4 05:17:43 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  4 05:17:43 np0005545273 python3[99382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:17:43 np0005545273 podman[99383]: 2025-12-04 10:17:43.925323931 +0000 UTC m=+0.053841039 container create 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:17:43 np0005545273 systemd[1]: Started libpod-conmon-84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478.scope.
Dec  4 05:17:43 np0005545273 podman[99383]: 2025-12-04 10:17:43.897235558 +0000 UTC m=+0.025752756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  4 05:17:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:17:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5e1438c2945060fa43fafabbc584189e4856eeb86b8261c2288b06012d798d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:17:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5e1438c2945060fa43fafabbc584189e4856eeb86b8261c2288b06012d798d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:17:44 np0005545273 podman[99383]: 2025-12-04 10:17:44.012838825 +0000 UTC m=+0.141356023 container init 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:17:44 np0005545273 podman[99383]: 2025-12-04 10:17:44.021831103 +0000 UTC m=+0.150348251 container start 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:17:44 np0005545273 podman[99383]: 2025-12-04 10:17:44.026069255 +0000 UTC m=+0.154586403 container attach 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.14( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] r=-1 lpr=62 pi=[57,62)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.12( v 60'19 lc 53'17 (0'0,60'19] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=60'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.f( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.b( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.2( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.1( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.16( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.6( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.19( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.1a( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=61/62 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.11( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=61/62 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.1e( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.e( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.d( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.7( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/62 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.15( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.4( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.11( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.13( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[10.10( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.9( v 60'22 lc 60'21 (0'0,60'22] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'22 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.8( v 53'18 (0'0,53'18] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[10.17( v 53'18 (0'0,53'18] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=53'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 62 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=61/62 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec  4 05:17:44 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec  4 05:17:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  4 05:17:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  4 05:17:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  4 05:17:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  4 05:17:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec  4 05:17:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  4 05:17:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  4 05:17:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.11( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.17( v 60'484 (0'0,60'484] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.5( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.f( v 60'484 (0'0,60'484] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.13( v 60'484 (0'0,60'484] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.7( v 60'485 (0'0,60'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.1b( v 60'484 (0'0,60'484] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.19( v 60'485 (0'0,60'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=60'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=62) [0]/[1] async=[0] r=0 lpr=62 pi=[57,62)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]: {
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "user_id": "openstack",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "display_name": "openstack",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "email": "",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "suspended": 0,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "max_buckets": 1000,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "subusers": [],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "keys": [
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        {
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:            "user": "openstack",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:            "access_key": "MV558CNQ0495KIP242HY",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:            "secret_key": "3t3T6cs6kVOAPfJDg1f7fdophmZLswl1bIUyAmXg",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:            "active": true,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:            "create_date": "2025-12-04T10:17:45.801730Z"
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        }
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    ],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "swift_keys": [],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "caps": [],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "op_mask": "read, write, delete",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "default_placement": "",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "default_storage_class": "",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "placement_tags": [],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "bucket_quota": {
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "enabled": false,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "check_on_raw": false,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "max_size": -1,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "max_size_kb": 0,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "max_objects": -1
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    },
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "user_quota": {
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "enabled": false,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "check_on_raw": false,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "max_size": -1,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "max_size_kb": 0,
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:        "max_objects": -1
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    },
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "temp_url_keys": [],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "type": "rgw",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "mfa_ids": [],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "account_id": "",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "path": "/",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "create_date": "2025-12-04T10:17:45.800970Z",
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "tags": [],
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]:    "group_ids": []
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]: }
Dec  4 05:17:45 np0005545273 lucid_goldstine[99399]: 
Dec  4 05:17:45 np0005545273 systemd[1]: libpod-84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478.scope: Deactivated successfully.
Dec  4 05:17:45 np0005545273 podman[99383]: 2025-12-04 10:17:45.849309618 +0000 UTC m=+1.977826726 container died 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:17:45 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1f5e1438c2945060fa43fafabbc584189e4856eeb86b8261c2288b06012d798d-merged.mount: Deactivated successfully.
Dec  4 05:17:45 np0005545273 podman[99383]: 2025-12-04 10:17:45.909087928 +0000 UTC m=+2.037605026 container remove 84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478 (image=quay.io/ceph/ceph:v20, name=lucid_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:17:45 np0005545273 systemd[1]: libpod-conmon-84a1546f1f92008385ed61078a764ad493bd5ea1a07f089c43536e3c2817c478.scope: Deactivated successfully.
Dec  4 05:17:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  4 05:17:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  4 05:17:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  4 05:17:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.502585411s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=63'486 lcod 63'486 active pruub 152.477401733s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.502213478s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.477401733s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506839752s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 active pruub 152.482147217s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506290436s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.481658936s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506756783s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.482147217s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506227493s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.481658936s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506434441s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.481796265s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.501778603s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.477355957s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.501735687s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.477355957s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.506053925s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.481796265s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505730629s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 active pruub 152.482131958s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505750656s) [0] async=[0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 active pruub 152.482177734s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505625725s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.482131958s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:46 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.505614281s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.482177734s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:46 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  4 05:17:46 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  4 05:17:46 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec  4 05:17:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 264 B/s, 0 objects/s recovering
Dec  4 05:17:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  4 05:17:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec  4 05:17:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  4 05:17:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  4 05:17:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  4 05:17:47 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.507721901s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 active pruub 152.482498169s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.507640839s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.482498169s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506669044s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 active pruub 152.481643677s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506562233s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.481643677s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.511236191s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 active pruub 152.486373901s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.511168480s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.486373901s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506362915s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 active pruub 152.482223511s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506306648s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 active pruub 152.482238770s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505764008s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'487 active pruub 152.481842041s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.506172180s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.482238770s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505682945s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'487 unknown NOTIFY pruub 152.481842041s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.509919167s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 active pruub 152.486312866s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.509859085s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.486312866s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=62/63 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505632401s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 152.482223511s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505712509s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 active pruub 152.482543945s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505644798s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 152.482543945s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=60'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505240440s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 active pruub 152.482498169s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec  4 05:17:47 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=62/63 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=14.505127907s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 152.482498169s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=63'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=0/0 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=60'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=64/65 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.11( v 63'487 (0'0,63'487] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:47 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 65 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=64/65 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=64) [0] r=0 lpr=64 pi=[57,64)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  4 05:17:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  4 05:17:48 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.1b( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.1( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 66 pg[9.5( v 63'488 (0'0,63'488] local-lis/les=65/66 n=7 ec=57/47 lis/c=62/57 les/c/f=63/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=63'488 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec  4 05:17:48 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec  4 05:17:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 9 peering, 312 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 511 B/s wr, 64 op/s; 1.4 KiB/s, 27 objects/s recovering
Dec  4 05:17:49 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec  4 05:17:49 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec  4 05:17:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:50 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  4 05:17:50 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  4 05:17:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 9 peering, 312 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 365 B/s wr, 45 op/s; 1.0 KiB/s, 20 objects/s recovering
Dec  4 05:17:52 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec  4 05:17:52 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec  4 05:17:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 38 op/s; 731 B/s, 16 objects/s recovering
Dec  4 05:17:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  4 05:17:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec  4 05:17:53 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  4 05:17:53 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  4 05:17:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  4 05:17:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec  4 05:17:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  4 05:17:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  4 05:17:53 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  4 05:17:54 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec  4 05:17:54 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec  4 05:17:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:17:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  4 05:17:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 33 op/s; 635 B/s, 14 objects/s recovering
Dec  4 05:17:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  4 05:17:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  4 05:17:55 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  4 05:17:55 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  4 05:17:55 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec  4 05:17:55 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec  4 05:17:55 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  4 05:17:55 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  4 05:17:56 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  4 05:17:56 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  4 05:17:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:57 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  4 05:17:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:17:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:17:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:17:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:17:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:17:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:17:58 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  4 05:17:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:17:59 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  4 05:17:59 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  4 05:17:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:00 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  4 05:18:00 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:18:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  4 05:18:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec  4 05:18:01 np0005545273 systemd-logind[798]: New session 34 of user zuul.
Dec  4 05:18:01 np0005545273 systemd[1]: Started Session 34 of User zuul.
Dec  4 05:18:01 np0005545273 podman[99645]: 2025-12-04 10:18:01.114176602 +0000 UTC m=+0.041491958 container create d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  4 05:18:01 np0005545273 systemd[1]: Started libpod-conmon-d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169.scope.
Dec  4 05:18:01 np0005545273 podman[99645]: 2025-12-04 10:18:01.095744205 +0000 UTC m=+0.023059581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:18:01 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:18:01 np0005545273 podman[99645]: 2025-12-04 10:18:01.247450758 +0000 UTC m=+0.174766134 container init d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:18:01 np0005545273 podman[99645]: 2025-12-04 10:18:01.255238497 +0000 UTC m=+0.182553853 container start d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:18:01 np0005545273 podman[99645]: 2025-12-04 10:18:01.258569567 +0000 UTC m=+0.185884924 container attach d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  4 05:18:01 np0005545273 flamboyant_euclid[99690]: 167 167
Dec  4 05:18:01 np0005545273 systemd[1]: libpod-d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169.scope: Deactivated successfully.
Dec  4 05:18:01 np0005545273 podman[99645]: 2025-12-04 10:18:01.262030582 +0000 UTC m=+0.189345938 container died d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:18:01 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6a7b3a43540bdb13484c37b1f663db10d1fca72385bbd477cc2332c86f70aaec-merged.mount: Deactivated successfully.
Dec  4 05:18:01 np0005545273 podman[99645]: 2025-12-04 10:18:01.298261582 +0000 UTC m=+0.225576928 container remove d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:18:01 np0005545273 systemd[1]: libpod-conmon-d97781e0f2da059c9abc24502ceb97263483e86a937f6009504e54f28e4e6169.scope: Deactivated successfully.
Dec  4 05:18:01 np0005545273 podman[99738]: 2025-12-04 10:18:01.436084347 +0000 UTC m=+0.039803807 container create ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:18:01 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  4 05:18:01 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  4 05:18:01 np0005545273 systemd[1]: Started libpod-conmon-ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33.scope.
Dec  4 05:18:01 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:18:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:01 np0005545273 podman[99738]: 2025-12-04 10:18:01.418838039 +0000 UTC m=+0.022557499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:18:01 np0005545273 podman[99738]: 2025-12-04 10:18:01.523094849 +0000 UTC m=+0.126814319 container init ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:18:01 np0005545273 podman[99738]: 2025-12-04 10:18:01.531164685 +0000 UTC m=+0.134884185 container start ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  4 05:18:01 np0005545273 podman[99738]: 2025-12-04 10:18:01.535874009 +0000 UTC m=+0.139593479 container attach ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  4 05:18:01 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec  4 05:18:01 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  4 05:18:01 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.265105247s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 active pruub 160.769821167s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.264783859s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 160.769821167s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.267188072s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=63'488 lcod 63'488 active pruub 160.772674561s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.267125130s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=63'488 lcod 63'488 unknown NOTIFY pruub 160.772674561s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.267120361s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 active pruub 160.772811890s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.266900063s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 160.772811890s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.271852493s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=60'484 lcod 60'484 active pruub 160.778198242s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:01 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 70 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=8.271541595s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=60'484 lcod 60'484 unknown NOTIFY pruub 160.778198242s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:01 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:01 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:01 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:01 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:02 np0005545273 zen_haslett[99755]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:18:02 np0005545273 zen_haslett[99755]: --> All data devices are unavailable
Dec  4 05:18:02 np0005545273 systemd[1]: libpod-ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33.scope: Deactivated successfully.
Dec  4 05:18:02 np0005545273 python3.9[99859]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:18:02 np0005545273 podman[99873]: 2025-12-04 10:18:02.140418386 +0000 UTC m=+0.037454641 container died ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:18:02 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ebcaead38cf3393ba71dbf6190550a3ff5b7fc43a388dfa6d333815c5441c784-merged.mount: Deactivated successfully.
Dec  4 05:18:02 np0005545273 podman[99873]: 2025-12-04 10:18:02.260166223 +0000 UTC m=+0.157202458 container remove ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:18:02 np0005545273 systemd[1]: libpod-conmon-ef6ba1fb4bd0e8133a45fe4e1a7fbe717da5ad44dfabf1c1bb3d96f2dd493a33.scope: Deactivated successfully.
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec  4 05:18:02 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  4 05:18:02 np0005545273 podman[100007]: 2025-12-04 10:18:02.688122092 +0000 UTC m=+0.038233239 container create bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:18:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  4 05:18:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  4 05:18:02 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=63'488 lcod 63'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=60'484 lcod 60'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:02 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=63'488 lcod 63'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[57,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:02 np0005545273 systemd[1]: Started libpod-conmon-bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3.scope.
Dec  4 05:18:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  4 05:18:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec  4 05:18:02 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:18:02 np0005545273 podman[100007]: 2025-12-04 10:18:02.671816087 +0000 UTC m=+0.021927254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:18:02 np0005545273 podman[100007]: 2025-12-04 10:18:02.780251619 +0000 UTC m=+0.130362816 container init bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:18:02 np0005545273 podman[100007]: 2025-12-04 10:18:02.790362094 +0000 UTC m=+0.140473241 container start bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:18:02 np0005545273 podman[100007]: 2025-12-04 10:18:02.794523735 +0000 UTC m=+0.144634932 container attach bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:18:02 np0005545273 interesting_hellman[100031]: 167 167
Dec  4 05:18:02 np0005545273 systemd[1]: libpod-bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3.scope: Deactivated successfully.
Dec  4 05:18:02 np0005545273 podman[100007]: 2025-12-04 10:18:02.798354288 +0000 UTC m=+0.148465445 container died bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:18:02 np0005545273 systemd[1]: var-lib-containers-storage-overlay-31d4a3e0ae8a4b8825eb00c280d98fe893f7f8784d282261429939d026031113-merged.mount: Deactivated successfully.
Dec  4 05:18:02 np0005545273 podman[100007]: 2025-12-04 10:18:02.836827242 +0000 UTC m=+0.186938389 container remove bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hellman, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:18:02 np0005545273 systemd[1]: libpod-conmon-bc00081c653ef969a6ae9d8786b985aff5b3bf62178577f1e614ce95490856b3.scope: Deactivated successfully.
Dec  4 05:18:03 np0005545273 podman[100078]: 2025-12-04 10:18:03.037666638 +0000 UTC m=+0.050718182 container create 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:18:03 np0005545273 systemd[1]: Started libpod-conmon-3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a.scope.
Dec  4 05:18:03 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:18:03 np0005545273 podman[100078]: 2025-12-04 10:18:03.010726024 +0000 UTC m=+0.023777648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:18:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:03 np0005545273 podman[100078]: 2025-12-04 10:18:03.120250383 +0000 UTC m=+0.133301947 container init 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:18:03 np0005545273 podman[100078]: 2025-12-04 10:18:03.127305284 +0000 UTC m=+0.140356828 container start 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 05:18:03 np0005545273 podman[100078]: 2025-12-04 10:18:03.13167039 +0000 UTC m=+0.144721954 container attach 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:18:03 np0005545273 great_murdock[100130]: {
Dec  4 05:18:03 np0005545273 great_murdock[100130]:    "0": [
Dec  4 05:18:03 np0005545273 great_murdock[100130]:        {
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "devices": [
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "/dev/loop3"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            ],
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_name": "ceph_lv0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_size": "21470642176",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "name": "ceph_lv0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "tags": {
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cluster_name": "ceph",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.crush_device_class": "",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.encrypted": "0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.objectstore": "bluestore",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osd_id": "0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.type": "block",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.vdo": "0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.with_tpm": "0"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            },
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "type": "block",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "vg_name": "ceph_vg0"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:        }
Dec  4 05:18:03 np0005545273 great_murdock[100130]:    ],
Dec  4 05:18:03 np0005545273 great_murdock[100130]:    "1": [
Dec  4 05:18:03 np0005545273 great_murdock[100130]:        {
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "devices": [
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "/dev/loop4"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            ],
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_name": "ceph_lv1",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_size": "21470642176",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "name": "ceph_lv1",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "tags": {
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cluster_name": "ceph",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.crush_device_class": "",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.encrypted": "0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.objectstore": "bluestore",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osd_id": "1",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.type": "block",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.vdo": "0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.with_tpm": "0"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            },
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "type": "block",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "vg_name": "ceph_vg1"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:        }
Dec  4 05:18:03 np0005545273 great_murdock[100130]:    ],
Dec  4 05:18:03 np0005545273 great_murdock[100130]:    "2": [
Dec  4 05:18:03 np0005545273 great_murdock[100130]:        {
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "devices": [
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "/dev/loop5"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            ],
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_name": "ceph_lv2",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_size": "21470642176",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "name": "ceph_lv2",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "tags": {
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.cluster_name": "ceph",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.crush_device_class": "",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.encrypted": "0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.objectstore": "bluestore",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osd_id": "2",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.type": "block",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.vdo": "0",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:                "ceph.with_tpm": "0"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            },
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "type": "block",
Dec  4 05:18:03 np0005545273 great_murdock[100130]:            "vg_name": "ceph_vg2"
Dec  4 05:18:03 np0005545273 great_murdock[100130]:        }
Dec  4 05:18:03 np0005545273 great_murdock[100130]:    ]
Dec  4 05:18:03 np0005545273 great_murdock[100130]: }
Dec  4 05:18:03 np0005545273 systemd[1]: libpod-3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a.scope: Deactivated successfully.
Dec  4 05:18:03 np0005545273 podman[100078]: 2025-12-04 10:18:03.502856691 +0000 UTC m=+0.515908235 container died 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  4 05:18:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c22efa9e49375f9fc422ddba993ebeb84c5656e6f94618c6a922591f1da5d721-merged.mount: Deactivated successfully.
Dec  4 05:18:03 np0005545273 podman[100078]: 2025-12-04 10:18:03.55513935 +0000 UTC m=+0.568190904 container remove 3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_murdock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:18:03 np0005545273 systemd[1]: libpod-conmon-3d6477275e7a2cb9ab3c9d5b761f96b26370fd548e7eecc1c21c0931216b456a.scope: Deactivated successfully.
Dec  4 05:18:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  4 05:18:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  4 05:18:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  4 05:18:03 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474749565s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=53'483 active pruub 169.307846069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474720001s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=53'483 unknown NOTIFY pruub 169.307846069s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=15.456710815s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=63'485 active pruub 176.290176392s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=72 pruub=15.456697464s) [2] r=-1 lpr=72 pi=[64,72)/1 crt=63'485 unknown NOTIFY pruub 176.290176392s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474306107s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'485 active pruub 169.308151245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474275589s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'487 active pruub 169.308166504s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474267006s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'485 unknown NOTIFY pruub 169.308151245s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72 pruub=8.474235535s) [2] r=-1 lpr=72 pi=[65,72)/1 crt=63'487 unknown NOTIFY pruub 169.308166504s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:03 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec  4 05:18:03 np0005545273 python3.9[100233]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:18:03 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:03 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=72) [2] r=0 lpr=72 pi=[64,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:03 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:03 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=72) [2] r=0 lpr=72 pi=[65,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:03 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:03 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=71/72 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:03 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=71/72 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=63'485 lcod 60'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:03 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=71/72 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=63'489 lcod 63'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:04 np0005545273 podman[100312]: 2025-12-04 10:18:04.065606773 +0000 UTC m=+0.060329616 container create d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:18:04 np0005545273 systemd[1]: Started libpod-conmon-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope.
Dec  4 05:18:04 np0005545273 podman[100312]: 2025-12-04 10:18:04.037677425 +0000 UTC m=+0.032400328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:18:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:18:04 np0005545273 podman[100312]: 2025-12-04 10:18:04.163901529 +0000 UTC m=+0.158624362 container init d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:18:04 np0005545273 podman[100312]: 2025-12-04 10:18:04.171932874 +0000 UTC m=+0.166655687 container start d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:18:04 np0005545273 podman[100312]: 2025-12-04 10:18:04.175772037 +0000 UTC m=+0.170494850 container attach d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:18:04 np0005545273 inspiring_saha[100328]: 167 167
Dec  4 05:18:04 np0005545273 systemd[1]: libpod-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope: Deactivated successfully.
Dec  4 05:18:04 np0005545273 conmon[100328]: conmon d8fda11346c3042b09c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope/container/memory.events
Dec  4 05:18:04 np0005545273 podman[100312]: 2025-12-04 10:18:04.180630615 +0000 UTC m=+0.175353438 container died d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:18:04 np0005545273 systemd[1]: var-lib-containers-storage-overlay-60ca272c40489e61977fde74399e58b7bc65ba674e765739c19bb169495aacad-merged.mount: Deactivated successfully.
Dec  4 05:18:04 np0005545273 podman[100312]: 2025-12-04 10:18:04.227973714 +0000 UTC m=+0.222696527 container remove d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_saha, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  4 05:18:04 np0005545273 systemd[1]: libpod-conmon-d8fda11346c3042b09c501d51ff1cad275192d12cd0c0b972375656641cb0dce.scope: Deactivated successfully.
Dec  4 05:18:04 np0005545273 podman[100355]: 2025-12-04 10:18:04.412235048 +0000 UTC m=+0.045502166 container create 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:18:04 np0005545273 systemd[1]: Started libpod-conmon-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope.
Dec  4 05:18:04 np0005545273 podman[100355]: 2025-12-04 10:18:04.393451362 +0000 UTC m=+0.026718500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:18:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:18:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:18:04 np0005545273 podman[100355]: 2025-12-04 10:18:04.525497017 +0000 UTC m=+0.158764155 container init 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:18:04 np0005545273 podman[100355]: 2025-12-04 10:18:04.538170304 +0000 UTC m=+0.171437422 container start 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:18:04 np0005545273 podman[100355]: 2025-12-04 10:18:04.54621135 +0000 UTC m=+0.179478488 container attach 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec  4 05:18:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  4 05:18:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  4 05:18:04 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=63'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.129341125s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 active pruub 170.542831421s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.129242897s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 170.542831421s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131555557s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=63'489 lcod 63'488 active pruub 170.545547485s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131464005s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=63'489 lcod 63'488 unknown NOTIFY pruub 170.545547485s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131343842s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=63'485 lcod 60'484 active pruub 170.545532227s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[64,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[65,73)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.131286621s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=63'485 lcod 60'484 unknown NOTIFY pruub 170.545532227s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.130816460s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 active pruub 170.545532227s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=71/72 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=15.130729675s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 170.545532227s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 73 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=64/65 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[64,73)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=65/66 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:04 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 73 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] r=0 lpr=73 pi=[65,73)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  4 05:18:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  4 05:18:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec  4 05:18:05 np0005545273 lvm[100451]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:18:05 np0005545273 lvm[100451]: VG ceph_vg1 finished
Dec  4 05:18:05 np0005545273 lvm[100450]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:18:05 np0005545273 lvm[100450]: VG ceph_vg0 finished
Dec  4 05:18:05 np0005545273 lvm[100453]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:18:05 np0005545273 lvm[100453]: VG ceph_vg2 finished
Dec  4 05:18:05 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  4 05:18:05 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  4 05:18:05 np0005545273 admiring_goldstine[100372]: {}
Dec  4 05:18:05 np0005545273 systemd[1]: libpod-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope: Deactivated successfully.
Dec  4 05:18:05 np0005545273 systemd[1]: libpod-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope: Consumed 1.482s CPU time.
Dec  4 05:18:05 np0005545273 podman[100355]: 2025-12-04 10:18:05.476409292 +0000 UTC m=+1.109676410 container died 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  4 05:18:05 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  4 05:18:05 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f08ea086f1bb68368ac75ac9b801eff73241662f4eb23e4eb9cc6b1a6f5c8a39-merged.mount: Deactivated successfully.
Dec  4 05:18:05 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  4 05:18:05 np0005545273 podman[100355]: 2025-12-04 10:18:05.529753697 +0000 UTC m=+1.163020815 container remove 95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:18:05 np0005545273 systemd[1]: libpod-conmon-95101af0abecff9b09e6f052703bf0f2caa85deac456dfc3d27f27a5fc404389.scope: Deactivated successfully.
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:18:05 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[65,73)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:18:05 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=73/74 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=73/74 n=7 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=71/57 les/c/f=72/58/0 sis=73) [2] r=0 lpr=73 pi=[57,73)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=73/74 n=7 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[64,73)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=73/74 n=7 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[65,73)/1 crt=63'487 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 74 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[65,73)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:18:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.337594986s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=53'483 lcod 0'0 active pruub 168.773162842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.337542534s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 168.773162842s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.336889267s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=63'486 lcod 63'486 active pruub 168.773162842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74 pruub=11.336738586s) [2] r=-1 lpr=74 pi=[57,74)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 168.773162842s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 74 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=74) [2] r=0 lpr=74 pi=[57,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  4 05:18:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=-1 lpr=75 pi=[57,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] r=0 lpr=75 pi=[57,75)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.978973389s) [2] async=[2] r=-1 lpr=75 pi=[65,75)/1 crt=63'487 active pruub 178.742630005s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.978861809s) [2] r=-1 lpr=75 pi=[65,75)/1 crt=63'487 unknown NOTIFY pruub 178.742630005s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.978198051s) [2] async=[2] r=-1 lpr=75 pi=[65,75)/1 crt=53'483 active pruub 178.742858887s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.977583885s) [2] async=[2] r=-1 lpr=75 pi=[64,75)/1 crt=63'485 active pruub 178.742523193s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.970883369s) [2] async=[2] r=-1 lpr=75 pi=[65,75)/1 crt=63'485 active pruub 178.735977173s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=73/74 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75 pruub=14.977322578s) [2] r=-1 lpr=75 pi=[64,75)/1 crt=63'485 unknown NOTIFY pruub 178.742523193s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.970640182s) [2] r=-1 lpr=75 pi=[65,75)/1 crt=63'485 unknown NOTIFY pruub 178.735977173s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 75 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75 pruub=14.977385521s) [2] r=-1 lpr=75 pi=[65,75)/1 crt=53'483 unknown NOTIFY pruub 178.742858887s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Dec  4 05:18:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  4 05:18:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec  4 05:18:07 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  4 05:18:07 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  4 05:18:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  4 05:18:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  4 05:18:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  4 05:18:07 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  4 05:18:07 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:07 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=75/76 n=6 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:07 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=75/76 n=7 ec=57/47 lis/c=73/64 les/c/f=74/65/0 sis=75) [2] r=0 lpr=75 pi=[64,75)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:07 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=75/76 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[57,75)/1 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:07 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=75/76 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=75) [2]/[1] async=[2] r=0 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:07 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 76 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=75/76 n=7 ec=57/47 lis/c=73/65 les/c/f=74/66/0 sis=75) [2] r=0 lpr=75 pi=[65,75)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec  4 05:18:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  4 05:18:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  4 05:18:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  4 05:18:08 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993161201s) [2] async=[2] r=-1 lpr=77 pi=[57,77)/1 crt=63'487 lcod 63'486 active pruub 174.463943481s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:08 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993092537s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=63'487 lcod 63'486 unknown NOTIFY pruub 174.463943481s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:08 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=75/76 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993423462s) [2] async=[2] r=-1 lpr=77 pi=[57,77)/1 crt=53'483 lcod 0'0 active pruub 174.464141846s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:08 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=75/76 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77 pruub=14.993075371s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 174.464141846s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:08 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  4 05:18:08 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:08 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:08 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:08 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 77 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Dec  4 05:18:09 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec  4 05:18:09 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec  4 05:18:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  4 05:18:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  4 05:18:09 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  4 05:18:09 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 78 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=77/78 n=6 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:09 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 78 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=77/78 n=7 ec=57/47 lis/c=75/57 les/c/f=76/58/0 sis=77) [2] r=0 lpr=77 pi=[57,77)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:10 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec  4 05:18:10 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec  4 05:18:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  4 05:18:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  4 05:18:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:11 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  4 05:18:11 np0005545273 systemd[1]: session-34.scope: Deactivated successfully.
Dec  4 05:18:11 np0005545273 systemd[1]: session-34.scope: Consumed 8.455s CPU time.
Dec  4 05:18:11 np0005545273 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Dec  4 05:18:11 np0005545273 systemd-logind[798]: Removed session 34.
Dec  4 05:18:11 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  4 05:18:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 6 objects/s recovering
Dec  4 05:18:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  4 05:18:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec  4 05:18:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  4 05:18:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  4 05:18:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  4 05:18:12 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  4 05:18:13 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  4 05:18:13 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  4 05:18:13 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  4 05:18:13 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  4 05:18:13 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  4 05:18:13 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  4 05:18:13 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec  4 05:18:13 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 277 B/s, 6 objects/s recovering
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  4 05:18:14 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec  4 05:18:15 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  4 05:18:16 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  4 05:18:16 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  4 05:18:16 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec  4 05:18:16 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec  4 05:18:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 238 B/s, 5 objects/s recovering
Dec  4 05:18:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  4 05:18:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec  4 05:18:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  4 05:18:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec  4 05:18:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  4 05:18:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  4 05:18:16 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  4 05:18:17 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  4 05:18:17 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  4 05:18:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  4 05:18:18 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  4 05:18:18 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  4 05:18:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  4 05:18:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec  4 05:18:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  4 05:18:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  4 05:18:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  4 05:18:18 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  4 05:18:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec  4 05:18:19 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  4 05:18:19 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  4 05:18:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.316296577s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=53'483 lcod 0'0 active pruub 184.772720337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.315694809s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 184.772720337s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.316226959s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=63'486 lcod 63'486 active pruub 184.773544312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81 pruub=14.316187859s) [2] r=-1 lpr=81 pi=[57,81)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 184.773544312s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:19 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:19 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=81) [2] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  4 05:18:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  4 05:18:19 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  4 05:18:19 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:19 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:19 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:19 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:19 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:19 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:20 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  4 05:18:20 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  4 05:18:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  4 05:18:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec  4 05:18:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  4 05:18:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  4 05:18:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  4 05:18:20 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  4 05:18:20 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec  4 05:18:21 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  4 05:18:21 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=83/84 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:21 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  4 05:18:21 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  4 05:18:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  4 05:18:21 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  4 05:18:21 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:21 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:21 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:21 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:21 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.411987305s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 active pruub 188.133209229s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:21 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.409049034s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=63'487 lcod 63'486 active pruub 188.130508423s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:21 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.411731720s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 188.133209229s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:21 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.408908844s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=63'487 lcod 63'486 unknown NOTIFY pruub 188.130508423s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:21 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  4 05:18:22 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  4 05:18:22 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  4 05:18:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  4 05:18:22 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 86 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:22 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 86 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=85/86 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec  4 05:18:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  4 05:18:24 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec  4 05:18:24 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  4 05:18:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  4 05:18:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  4 05:18:24 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec  4 05:18:25 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  4 05:18:25 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  4 05:18:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  4 05:18:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:18:26
Dec  4 05:18:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:18:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:18:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.data']
Dec  4 05:18:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:18:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 2 objects/s recovering
Dec  4 05:18:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec  4 05:18:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec  4 05:18:26 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  4 05:18:26 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  4 05:18:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  4 05:18:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec  4 05:18:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  4 05:18:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  4 05:18:26 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  4 05:18:27 np0005545273 systemd-logind[798]: New session 35 of user zuul.
Dec  4 05:18:27 np0005545273 systemd[1]: Started Session 35 of User zuul.
Dec  4 05:18:27 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  4 05:18:27 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:18:27 np0005545273 python3.9[100696]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:18:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:18:27 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  4 05:18:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec  4 05:18:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec  4 05:18:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec  4 05:18:28 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  4 05:18:28 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  4 05:18:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  4 05:18:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  4 05:18:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  4 05:18:28 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  4 05:18:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec  4 05:18:29 np0005545273 python3.9[100870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:18:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  4 05:18:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  4 05:18:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:29 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  4 05:18:29 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  4 05:18:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  4 05:18:30 np0005545273 python3.9[101026]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:18:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  4 05:18:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  4 05:18:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec  4 05:18:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec  4 05:18:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec  4 05:18:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  4 05:18:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  4 05:18:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  4 05:18:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  4 05:18:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec  4 05:18:31 np0005545273 python3.9[101179]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:18:31 np0005545273 python3.9[101333]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:18:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  4 05:18:32 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  4 05:18:32 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  4 05:18:32 np0005545273 python3.9[101487]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:18:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec  4 05:18:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec  4 05:18:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  4 05:18:33 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  4 05:18:33 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  4 05:18:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  4 05:18:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  4 05:18:33 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 90 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=90 pruub=9.780766487s) [2] r=-1 lpr=90 pi=[64,90)/1 crt=63'485 active pruub 200.290924072s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:33 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 90 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=90 pruub=9.780334473s) [2] r=-1 lpr=90 pi=[64,90)/1 crt=63'485 unknown NOTIFY pruub 200.290924072s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:33 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=90) [2] r=0 lpr=90 pi=[64,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:33 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  4 05:18:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec  4 05:18:33 np0005545273 python3.9[101637]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:18:33 np0005545273 network[101654]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:18:33 np0005545273 network[101655]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:18:33 np0005545273 network[101656]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:18:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  4 05:18:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  4 05:18:34 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  4 05:18:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 92 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=0 lpr=92 pi=[64,92)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:34 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 92 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=64/65 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=0 lpr=92 pi=[64,92)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[64,92)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:34 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[64,92)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  4 05:18:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec  4 05:18:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec  4 05:18:34 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec  4 05:18:34 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec  4 05:18:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  4 05:18:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  4 05:18:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  4 05:18:35 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  4 05:18:35 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 93 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=8.783335686s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'483 active pruub 201.308776855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:35 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 93 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=93 pruub=8.782843590s) [1] r=-1 lpr=93 pi=[65,93)/1 crt=53'483 unknown NOTIFY pruub 201.308776855s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:35 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 93 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=93) [1] r=0 lpr=93 pi=[65,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec  4 05:18:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  4 05:18:35 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 93 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=92/93 n=6 ec=57/47 lis/c=64/64 les/c/f=65/65/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[64,92)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:35 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  4 05:18:35 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  4 05:18:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  4 05:18:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  4 05:18:36 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:36 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:36 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  4 05:18:36 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=92/93 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94 pruub=15.028572083s) [2] async=[2] r=-1 lpr=94 pi=[64,94)/1 crt=63'485 active pruub 208.567962646s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:36 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=92/93 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94 pruub=15.028485298s) [2] r=-1 lpr=94 pi=[64,94)/1 crt=63'485 unknown NOTIFY pruub 208.567962646s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:36 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=0 lpr=94 pi=[65,94)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:36 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 94 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=0 lpr=94 pi=[65,94)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:36 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[65,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:36 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  4 05:18:36 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec  4 05:18:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec  4 05:18:36 np0005545273 python3.9[101918]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0771607641612692e-06 of space, bias 4.0, pg target 0.001292592916993523 quantized to 16 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.324168131796575e-06 of space, bias 1.0, pg target 0.0012972504395389725 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:18:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:18:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  4 05:18:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  4 05:18:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  4 05:18:37 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  4 05:18:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 95 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=95 pruub=8.048609734s) [0] r=-1 lpr=95 pi=[73,95)/1 crt=53'483 active pruub 187.843475342s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 95 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=95 pruub=8.048556328s) [0] r=-1 lpr=95 pi=[73,95)/1 crt=53'483 unknown NOTIFY pruub 187.843475342s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:37 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=95) [0] r=0 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:37 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 95 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=94/95 n=6 ec=57/47 lis/c=92/64 les/c/f=93/65/0 sis=94) [2] r=0 lpr=94 pi=[64,94)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec  4 05:18:37 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 95 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=94/95 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[65,94)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:37 np0005545273 python3.9[102068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:18:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  4 05:18:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  4 05:18:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  4 05:18:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:38 np0005545273 python3.9[102222]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:18:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  4 05:18:39 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  4 05:18:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:39 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=94/95 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.043639183s) [1] async=[1] r=-1 lpr=96 pi=[65,96)/1 crt=53'483 active pruub 210.963851929s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:39 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=94/95 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96 pruub=14.043495178s) [1] r=-1 lpr=96 pi=[65,96)/1 crt=53'483 unknown NOTIFY pruub 210.963851929s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:39 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[73,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:39 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[73,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  4 05:18:39 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:39 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 96 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:39 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 96 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=0 lpr=96 pi=[73,96)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:39 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 96 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] r=0 lpr=96 pi=[73,96)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:40 np0005545273 python3.9[102380]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:18:40 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  4 05:18:40 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  4 05:18:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  4 05:18:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  4 05:18:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  4 05:18:40 np0005545273 python3.9[102464]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:18:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  4 05:18:41 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  4 05:18:41 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 97 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=94/65 les/c/f=95/66/0 sis=96) [1] r=0 lpr=96 pi=[65,96)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:41 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 97 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=96) [0]/[2] async=[0] r=0 lpr=96 pi=[73,96)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  4 05:18:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  4 05:18:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  4 05:18:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec  4 05:18:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  4 05:18:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  4 05:18:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  4 05:18:43 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  4 05:18:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98 pruub=14.271065712s) [0] async=[0] r=-1 lpr=98 pi=[73,98)/1 crt=53'483 active pruub 199.639892578s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:43 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=96/97 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98 pruub=14.270962715s) [0] r=-1 lpr=98 pi=[73,98)/1 crt=53'483 unknown NOTIFY pruub 199.639892578s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98) [0] r=0 lpr=98 pi=[73,98)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:43 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 98 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98) [0] r=0 lpr=98 pi=[73,98)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:43 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec  4 05:18:43 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec  4 05:18:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec  4 05:18:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec  4 05:18:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  4 05:18:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  4 05:18:44 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  4 05:18:44 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 99 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=98/99 n=6 ec=57/47 lis/c=96/73 les/c/f=97/74/0 sis=98) [0] r=0 lpr=98 pi=[73,98)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 1 objects/s recovering
Dec  4 05:18:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  4 05:18:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  4 05:18:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:45 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec  4 05:18:45 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec  4 05:18:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec  4 05:18:47 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec  4 05:18:47 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec  4 05:18:47 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec  4 05:18:47 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec  4 05:18:48 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  4 05:18:48 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  4 05:18:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Dec  4 05:18:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec  4 05:18:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec  4 05:18:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  4 05:18:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec  4 05:18:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  4 05:18:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  4 05:18:49 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  4 05:18:49 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  4 05:18:49 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  4 05:18:49 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  4 05:18:49 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  4 05:18:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  4 05:18:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Dec  4 05:18:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec  4 05:18:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec  4 05:18:51 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec  4 05:18:51 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec  4 05:18:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  4 05:18:51 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec  4 05:18:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  4 05:18:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  4 05:18:51 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  4 05:18:52 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec  4 05:18:52 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec  4 05:18:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  4 05:18:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec  4 05:18:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec  4 05:18:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec  4 05:18:53 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec  4 05:18:53 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec  4 05:18:53 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec  4 05:18:53 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec  4 05:18:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  4 05:18:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec  4 05:18:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  4 05:18:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  4 05:18:53 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  4 05:18:53 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 102 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=102 pruub=14.467451096s) [2] r=-1 lpr=102 pi=[65,102)/1 crt=63'487 active pruub 225.304885864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:53 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 102 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=102 pruub=14.467391014s) [2] r=-1 lpr=102 pi=[65,102)/1 crt=63'487 unknown NOTIFY pruub 225.304885864s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:53 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=102) [2] r=0 lpr=102 pi=[65,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  4 05:18:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  4 05:18:54 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  4 05:18:54 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[65,103)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:54 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 103 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=-1 lpr=103 pi=[65,103)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  4 05:18:54 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=0 lpr=103 pi=[65,103)/1 crt=63'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:54 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] r=0 lpr=103 pi=[65,103)/1 crt=63'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec  4 05:18:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec  4 05:18:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:55 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec  4 05:18:55 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec  4 05:18:55 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec  4 05:18:55 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec  4 05:18:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  4 05:18:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  4 05:18:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  4 05:18:55 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  4 05:18:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec  4 05:18:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec  4 05:18:56 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 104 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=103/104 n=6 ec=57/47 lis/c=65/65 les/c/f=66/66/0 sis=103) [2]/[0] async=[2] r=0 lpr=103 pi=[65,103)/1 crt=63'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec  4 05:18:56 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  4 05:18:56 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  4 05:18:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  4 05:18:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  4 05:18:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:56 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  4 05:18:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec  4 05:18:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec  4 05:18:56 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=103/104 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105 pruub=15.487646103s) [2] async=[2] r=-1 lpr=105 pi=[65,105)/1 crt=63'487 active pruub 229.378829956s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:56 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  4 05:18:56 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=103/104 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105 pruub=15.487515450s) [2] r=-1 lpr=105 pi=[65,105)/1 crt=63'487 unknown NOTIFY pruub 229.378829956s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:56 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105) [2] r=0 lpr=105 pi=[65,105)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:56 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 105 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105) [2] r=0 lpr=105 pi=[65,105)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:18:57 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec  4 05:18:57 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec  4 05:18:57 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  4 05:18:57 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  4 05:18:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  4 05:18:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  4 05:18:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  4 05:18:57 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  4 05:18:57 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 106 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=105/106 n=6 ec=57/47 lis/c=103/65 les/c/f=104/66/0 sis=105) [2] r=0 lpr=105 pi=[65,105)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:18:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec  4 05:18:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  4 05:18:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:18:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:18:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:18:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:18:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:18:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:18:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v233: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:18:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec  4 05:18:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec  4 05:18:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  4 05:18:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  4 05:18:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  4 05:18:58 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  4 05:18:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec  4 05:18:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec  4 05:18:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec  4 05:18:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:18:59 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 107 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=107 pruub=10.985615730s) [0] r=-1 lpr=107 pi=[85,107)/1 crt=63'487 active pruub 213.138580322s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:18:59 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 107 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=107 pruub=10.985573769s) [0] r=-1 lpr=107 pi=[85,107)/1 crt=63'487 unknown NOTIFY pruub 213.138580322s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:18:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  4 05:18:59 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 107 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=107) [0] r=0 lpr=107 pi=[85,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec  4 05:19:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec  4 05:19:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  4 05:19:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  4 05:19:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  4 05:19:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec  4 05:19:00 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[85,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:00 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[85,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:00 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  4 05:19:00 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=0 lpr=108 pi=[85,108)/1 crt=63'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:00 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] r=0 lpr=108 pi=[85,108)/1 crt=63'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  4 05:19:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  4 05:19:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  4 05:19:01 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  4 05:19:01 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 109 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=108/109 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[85,108)/1 crt=63'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:19:02 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  4 05:19:02 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  4 05:19:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 1 objects/s recovering
Dec  4 05:19:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  4 05:19:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  4 05:19:02 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  4 05:19:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=108/109 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110 pruub=15.003255844s) [0] async=[0] r=-1 lpr=110 pi=[85,110)/1 crt=63'487 active pruub 220.203842163s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:02 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=108/109 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110 pruub=15.003170967s) [0] r=-1 lpr=110 pi=[85,110)/1 crt=63'487 unknown NOTIFY pruub 220.203842163s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:02 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110) [0] r=0 lpr=110 pi=[85,110)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:02 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 110 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110) [0] r=0 lpr=110 pi=[85,110)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:03 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec  4 05:19:03 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec  4 05:19:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  4 05:19:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  4 05:19:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  4 05:19:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  4 05:19:03 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  4 05:19:03 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 111 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=110/111 n=6 ec=57/47 lis/c=108/85 les/c/f=109/86/0 sis=110) [0] r=0 lpr=110 pi=[85,110)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:19:04 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec  4 05:19:04 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec  4 05:19:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  4 05:19:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  4 05:19:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 4 objects/s recovering
Dec  4 05:19:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:05 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec  4 05:19:05 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec  4 05:19:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  4 05:19:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  4 05:19:06 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec  4 05:19:06 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:19:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:19:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 1 peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 143 B/s, 3 objects/s recovering
Dec  4 05:19:06 np0005545273 podman[102728]: 2025-12-04 10:19:06.888262219 +0000 UTC m=+0.045671587 container create e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:19:06 np0005545273 systemd[1]: Started libpod-conmon-e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04.scope.
Dec  4 05:19:06 np0005545273 podman[102728]: 2025-12-04 10:19:06.865712923 +0000 UTC m=+0.023122311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:19:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:19:07 np0005545273 podman[102728]: 2025-12-04 10:19:07.00554695 +0000 UTC m=+0.162956358 container init e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:19:07 np0005545273 podman[102728]: 2025-12-04 10:19:07.01423861 +0000 UTC m=+0.171647988 container start e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:19:07 np0005545273 eager_lewin[102745]: 167 167
Dec  4 05:19:07 np0005545273 systemd[1]: libpod-e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04.scope: Deactivated successfully.
Dec  4 05:19:07 np0005545273 podman[102728]: 2025-12-04 10:19:07.02289765 +0000 UTC m=+0.180307028 container attach e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:19:07 np0005545273 podman[102728]: 2025-12-04 10:19:07.023762321 +0000 UTC m=+0.181171709 container died e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:19:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:19:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:19:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:19:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a838cb98d6f455be4aaeedb599f025adf799ce942c4e352b39080a004dcd7517-merged.mount: Deactivated successfully.
Dec  4 05:19:07 np0005545273 podman[102728]: 2025-12-04 10:19:07.077676886 +0000 UTC m=+0.235086264 container remove e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lewin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:19:07 np0005545273 systemd[1]: libpod-conmon-e8539c9a5c8a94a6884b75f1f01c170eb129bd3e964b183453923df083457a04.scope: Deactivated successfully.
Dec  4 05:19:07 np0005545273 podman[102768]: 2025-12-04 10:19:07.268262032 +0000 UTC m=+0.057944504 container create 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:19:07 np0005545273 systemd[1]: Started libpod-conmon-205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba.scope.
Dec  4 05:19:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:19:07 np0005545273 podman[102768]: 2025-12-04 10:19:07.239264929 +0000 UTC m=+0.028947481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:19:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:07 np0005545273 podman[102768]: 2025-12-04 10:19:07.353853695 +0000 UTC m=+0.143536167 container init 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:19:07 np0005545273 podman[102768]: 2025-12-04 10:19:07.361345897 +0000 UTC m=+0.151028369 container start 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:19:07 np0005545273 podman[102768]: 2025-12-04 10:19:07.365636131 +0000 UTC m=+0.155318603 container attach 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:19:07 np0005545273 intelligent_wescoff[102785]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:19:07 np0005545273 intelligent_wescoff[102785]: --> All data devices are unavailable
Dec  4 05:19:07 np0005545273 systemd[1]: libpod-205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba.scope: Deactivated successfully.
Dec  4 05:19:07 np0005545273 podman[102768]: 2025-12-04 10:19:07.859453909 +0000 UTC m=+0.649136381 container died 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec  4 05:19:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-31e949e39faea1db8defce00be95769bb4db550e19586f2a1fd0ef5f6d6ab17b-merged.mount: Deactivated successfully.
Dec  4 05:19:07 np0005545273 podman[102768]: 2025-12-04 10:19:07.932231224 +0000 UTC m=+0.721913696 container remove 205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_wescoff, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:19:07 np0005545273 systemd[1]: libpod-conmon-205ef6407cad8b02c325524ded1890c1b2fe734c9e663dbcf046665920ebf2ba.scope: Deactivated successfully.
Dec  4 05:19:08 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec  4 05:19:08 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec  4 05:19:08 np0005545273 podman[102880]: 2025-12-04 10:19:08.402964183 +0000 UTC m=+0.035028800 container create 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:19:08 np0005545273 systemd[1]: Started libpod-conmon-731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e.scope.
Dec  4 05:19:08 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:19:08 np0005545273 podman[102880]: 2025-12-04 10:19:08.387292682 +0000 UTC m=+0.019357319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:19:08 np0005545273 podman[102880]: 2025-12-04 10:19:08.484777553 +0000 UTC m=+0.116842200 container init 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:19:08 np0005545273 podman[102880]: 2025-12-04 10:19:08.495106423 +0000 UTC m=+0.127171040 container start 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:19:08 np0005545273 podman[102880]: 2025-12-04 10:19:08.49910817 +0000 UTC m=+0.131172787 container attach 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  4 05:19:08 np0005545273 gallant_swartz[102895]: 167 167
Dec  4 05:19:08 np0005545273 podman[102880]: 2025-12-04 10:19:08.501721044 +0000 UTC m=+0.133785691 container died 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:19:08 np0005545273 systemd[1]: libpod-731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e.scope: Deactivated successfully.
Dec  4 05:19:08 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c27fa89e14e2648e57303547ff29917aede5e9d2a385adc58d19e47e83e99dd4-merged.mount: Deactivated successfully.
Dec  4 05:19:08 np0005545273 podman[102880]: 2025-12-04 10:19:08.542001099 +0000 UTC m=+0.174065716 container remove 731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swartz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:19:08 np0005545273 systemd[1]: libpod-conmon-731bded0713502e511f52241e9574de535493b1da042e32a7e00934160596a8e.scope: Deactivated successfully.
Dec  4 05:19:08 np0005545273 podman[102918]: 2025-12-04 10:19:08.713082442 +0000 UTC m=+0.051660712 container create 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:19:08 np0005545273 systemd[1]: Started libpod-conmon-76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490.scope.
Dec  4 05:19:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 1 objects/s recovering
Dec  4 05:19:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec  4 05:19:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec  4 05:19:08 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:19:08 np0005545273 podman[102918]: 2025-12-04 10:19:08.69355262 +0000 UTC m=+0.032130920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:19:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:08 np0005545273 podman[102918]: 2025-12-04 10:19:08.813958305 +0000 UTC m=+0.152536675 container init 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:19:08 np0005545273 podman[102918]: 2025-12-04 10:19:08.829117323 +0000 UTC m=+0.167695593 container start 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec  4 05:19:08 np0005545273 podman[102918]: 2025-12-04 10:19:08.833015787 +0000 UTC m=+0.171594167 container attach 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  4 05:19:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  4 05:19:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  4 05:19:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  4 05:19:09 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  4 05:19:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec  4 05:19:09 np0005545273 boring_tesla[102935]: {
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:    "0": [
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:        {
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "devices": [
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "/dev/loop3"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            ],
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_name": "ceph_lv0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_size": "21470642176",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "name": "ceph_lv0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "tags": {
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cluster_name": "ceph",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.crush_device_class": "",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.encrypted": "0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.objectstore": "bluestore",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osd_id": "0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.type": "block",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.vdo": "0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.with_tpm": "0"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            },
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "type": "block",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "vg_name": "ceph_vg0"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:        }
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:    ],
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:    "1": [
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:        {
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "devices": [
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "/dev/loop4"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            ],
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_name": "ceph_lv1",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_size": "21470642176",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "name": "ceph_lv1",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "tags": {
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cluster_name": "ceph",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.crush_device_class": "",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.encrypted": "0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.objectstore": "bluestore",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osd_id": "1",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.type": "block",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.vdo": "0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.with_tpm": "0"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            },
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "type": "block",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "vg_name": "ceph_vg1"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:        }
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:    ],
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:    "2": [
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:        {
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "devices": [
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "/dev/loop5"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            ],
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_name": "ceph_lv2",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_size": "21470642176",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "name": "ceph_lv2",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "tags": {
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.cluster_name": "ceph",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.crush_device_class": "",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.encrypted": "0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.objectstore": "bluestore",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osd_id": "2",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.type": "block",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.vdo": "0",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:                "ceph.with_tpm": "0"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            },
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "type": "block",
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:            "vg_name": "ceph_vg2"
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:        }
Dec  4 05:19:09 np0005545273 boring_tesla[102935]:    ]
Dec  4 05:19:09 np0005545273 boring_tesla[102935]: }
Dec  4 05:19:09 np0005545273 systemd[1]: libpod-76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490.scope: Deactivated successfully.
Dec  4 05:19:09 np0005545273 podman[102918]: 2025-12-04 10:19:09.149938252 +0000 UTC m=+0.488516542 container died 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:19:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a65cb8f86156dd6c58fc3d6ca76d3df1f396e6e5bc771282ac38ecd737b95bf1-merged.mount: Deactivated successfully.
Dec  4 05:19:09 np0005545273 podman[102918]: 2025-12-04 10:19:09.200500447 +0000 UTC m=+0.539078707 container remove 76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tesla, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:19:09 np0005545273 systemd[1]: libpod-conmon-76b451c184d434250ce0637190dc1c55727792835ab02c8ed8d7f8cff7313490.scope: Deactivated successfully.
Dec  4 05:19:09 np0005545273 podman[103020]: 2025-12-04 10:19:09.661715357 +0000 UTC m=+0.038142846 container create 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:19:09 np0005545273 systemd[1]: Started libpod-conmon-8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c.scope.
Dec  4 05:19:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:19:09 np0005545273 podman[103020]: 2025-12-04 10:19:09.738624109 +0000 UTC m=+0.115051618 container init 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:19:09 np0005545273 podman[103020]: 2025-12-04 10:19:09.643503036 +0000 UTC m=+0.019930545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:19:09 np0005545273 podman[103020]: 2025-12-04 10:19:09.744364228 +0000 UTC m=+0.120791717 container start 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:19:09 np0005545273 podman[103020]: 2025-12-04 10:19:09.748481068 +0000 UTC m=+0.124908577 container attach 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:19:09 np0005545273 pensive_hertz[103037]: 167 167
Dec  4 05:19:09 np0005545273 systemd[1]: libpod-8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c.scope: Deactivated successfully.
Dec  4 05:19:09 np0005545273 podman[103020]: 2025-12-04 10:19:09.750919586 +0000 UTC m=+0.127347075 container died 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:19:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6214015c1567bb69c76d3fda0e3dbdcd850b8eb5c98d551abbcb0262627dedd4-merged.mount: Deactivated successfully.
Dec  4 05:19:09 np0005545273 podman[103020]: 2025-12-04 10:19:09.785664098 +0000 UTC m=+0.162091587 container remove 8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:19:09 np0005545273 systemd[1]: libpod-conmon-8559abde0dbd5410dbc44b9e8b5d905d9645bc9a4760d5be3fed1761f372ea3c.scope: Deactivated successfully.
Dec  4 05:19:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:09 np0005545273 podman[103062]: 2025-12-04 10:19:09.928872416 +0000 UTC m=+0.040129552 container create f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:19:09 np0005545273 systemd[1]: Started libpod-conmon-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope.
Dec  4 05:19:10 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:19:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:10 np0005545273 podman[103062]: 2025-12-04 10:19:09.90923993 +0000 UTC m=+0.020497076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:19:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:19:10 np0005545273 podman[103062]: 2025-12-04 10:19:10.018266711 +0000 UTC m=+0.129523897 container init f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:19:10 np0005545273 podman[103062]: 2025-12-04 10:19:10.027029644 +0000 UTC m=+0.138286780 container start f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:19:10 np0005545273 podman[103062]: 2025-12-04 10:19:10.043454371 +0000 UTC m=+0.154711517 container attach f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:19:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  4 05:19:10 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec  4 05:19:10 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec  4 05:19:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  4 05:19:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  4 05:19:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 1 objects/s recovering
Dec  4 05:19:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  4 05:19:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:19:10 np0005545273 lvm[103155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:19:10 np0005545273 lvm[103155]: VG ceph_vg0 finished
Dec  4 05:19:10 np0005545273 lvm[103157]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:19:10 np0005545273 lvm[103157]: VG ceph_vg1 finished
Dec  4 05:19:10 np0005545273 lvm[103159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:19:10 np0005545273 lvm[103159]: VG ceph_vg2 finished
Dec  4 05:19:10 np0005545273 silly_poincare[103078]: {}
Dec  4 05:19:10 np0005545273 systemd[1]: libpod-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope: Deactivated successfully.
Dec  4 05:19:10 np0005545273 podman[103062]: 2025-12-04 10:19:10.926666941 +0000 UTC m=+1.037924067 container died f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:19:10 np0005545273 systemd[1]: libpod-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope: Consumed 1.370s CPU time.
Dec  4 05:19:10 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4908df7a9560be4283324df8167c83d7f5b3a7014f61518953708e2d388abeea-merged.mount: Deactivated successfully.
Dec  4 05:19:10 np0005545273 podman[103062]: 2025-12-04 10:19:10.977280917 +0000 UTC m=+1.088538043 container remove f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_poincare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:19:11 np0005545273 systemd[1]: libpod-conmon-f1fe5a1b2c9206c0c00a000bee1e4886afd298263c23f5e98798f1241643f1ae.scope: Deactivated successfully.
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  4 05:19:11 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  4 05:19:11 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 113 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=113 pruub=8.574896812s) [1] r=-1 lpr=113 pi=[75,113)/1 crt=53'483 active pruub 221.872146606s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:11 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 113 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=113 pruub=8.574784279s) [1] r=-1 lpr=113 pi=[75,113)/1 crt=53'483 unknown NOTIFY pruub 221.872146606s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:11 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 112 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=112 pruub=14.546545029s) [0] r=-1 lpr=112 pi=[73,112)/1 crt=63'485 active pruub 227.844512939s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:11 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 113 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=112 pruub=14.546380043s) [0] r=-1 lpr=112 pi=[73,112)/1 crt=63'485 unknown NOTIFY pruub 227.844512939s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:11 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=113) [1] r=0 lpr=113 pi=[75,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:11 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=112) [0] r=0 lpr=113 pi=[73,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  4 05:19:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  4 05:19:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  4 05:19:12 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  4 05:19:12 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[75,114)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:12 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[75,114)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:12 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[73,114)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:12 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[73,114)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:12 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=0 lpr=114 pi=[75,114)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:12 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=0 lpr=114 pi=[73,114)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:12 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=73/74 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] r=0 lpr=114 pi=[73,114)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:12 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 114 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=75/76 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] r=0 lpr=114 pi=[75,114)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:12 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec  4 05:19:12 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec  4 05:19:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  4 05:19:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  4 05:19:13 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  4 05:19:13 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 115 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=114/115 n=6 ec=57/47 lis/c=75/75 les/c/f=76/76/0 sis=114) [1]/[2] async=[1] r=0 lpr=114 pi=[75,114)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:19:13 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 115 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=114/115 n=6 ec=57/47 lis/c=73/73 les/c/f=74/74/0 sis=114) [0]/[2] async=[0] r=0 lpr=114 pi=[73,114)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:19:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  4 05:19:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  4 05:19:14 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  4 05:19:14 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116) [0] r=0 lpr=116 pi=[73,116)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:14 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116) [0] r=0 lpr=116 pi=[73,116)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:14 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116 pruub=15.236147881s) [0] async=[0] r=-1 lpr=116 pi=[73,116)/1 crt=63'485 active pruub 231.565338135s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:14 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116 pruub=15.236060143s) [0] r=-1 lpr=116 pi=[73,116)/1 crt=63'485 unknown NOTIFY pruub 231.565338135s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:14 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116 pruub=14.986249924s) [1] async=[1] r=-1 lpr=116 pi=[75,116)/1 crt=53'483 active pruub 231.316085815s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:14 np0005545273 ceph-osd[88205]: osd.2 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=114/115 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116 pruub=14.986185074s) [1] r=-1 lpr=116 pi=[75,116)/1 crt=53'483 unknown NOTIFY pruub 231.316085815s@ mbc={}] state<Start>: transitioning to Stray
Dec  4 05:19:14 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  4 05:19:14 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 116 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  4 05:19:14 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  4 05:19:14 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  4 05:19:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  4 05:19:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  4 05:19:15 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  4 05:19:15 np0005545273 ceph-osd[86021]: osd.0 pg_epoch: 117 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=116/117 n=6 ec=57/47 lis/c=114/73 les/c/f=115/74/0 sis=116) [0] r=0 lpr=116 pi=[73,116)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:19:15 np0005545273 ceph-osd[87071]: osd.1 pg_epoch: 117 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=116/117 n=6 ec=57/47 lis/c=114/75 les/c/f=115/76/0 sis=116) [1] r=0 lpr=116 pi=[75,116)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  4 05:19:15 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec  4 05:19:15 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec  4 05:19:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:18 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec  4 05:19:18 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec  4 05:19:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec  4 05:19:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec  4 05:19:18 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  4 05:19:18 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  4 05:19:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Dec  4 05:19:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:20 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec  4 05:19:20 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec  4 05:19:20 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec  4 05:19:20 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec  4 05:19:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 B/s, 1 objects/s recovering
Dec  4 05:19:21 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  4 05:19:21 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  4 05:19:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Dec  4 05:19:24 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec  4 05:19:24 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec  4 05:19:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Dec  4 05:19:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:19:26
Dec  4 05:19:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:19:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:19:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'vms']
Dec  4 05:19:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:19:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Dec  4 05:19:27 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec  4 05:19:27 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:19:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:19:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec  4 05:19:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec  4 05:19:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec  4 05:19:29 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec  4 05:19:29 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec  4 05:19:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  4 05:19:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  4 05:19:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:31 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec  4 05:19:31 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec  4 05:19:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:34 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec  4 05:19:34 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec  4 05:19:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:35 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec  4 05:19:35 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:19:36 np0005545273 python3.9[103377]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:19:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:19:37 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec  4 05:19:37 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec  4 05:19:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec  4 05:19:38 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec  4 05:19:38 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  4 05:19:38 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  4 05:19:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:38 np0005545273 python3.9[103666]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  4 05:19:39 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec  4 05:19:39 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec  4 05:19:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  4 05:19:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  4 05:19:39 np0005545273 python3.9[103818]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  4 05:19:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:40 np0005545273 python3.9[103970]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:19:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec  4 05:19:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec  4 05:19:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:41 np0005545273 python3.9[104122]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  4 05:19:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec  4 05:19:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec  4 05:19:41 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  4 05:19:41 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  4 05:19:41 np0005545273 systemd[76741]: Created slice User Background Tasks Slice.
Dec  4 05:19:41 np0005545273 systemd[76741]: Starting Cleanup of User's Temporary Files and Directories...
Dec  4 05:19:41 np0005545273 systemd[76741]: Finished Cleanup of User's Temporary Files and Directories.
Dec  4 05:19:42 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec  4 05:19:42 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec  4 05:19:42 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec  4 05:19:42 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec  4 05:19:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:43 np0005545273 python3.9[104275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:19:43 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec  4 05:19:43 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec  4 05:19:43 np0005545273 python3.9[104427]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:19:44 np0005545273 python3.9[104505]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:19:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:45 np0005545273 python3.9[104657]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:19:45 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  4 05:19:45 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  4 05:19:45 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec  4 05:19:45 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec  4 05:19:46 np0005545273 python3.9[104811]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  4 05:19:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:46 np0005545273 python3.9[104964]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  4 05:19:47 np0005545273 python3.9[105117]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 05:19:48 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec  4 05:19:48 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec  4 05:19:48 np0005545273 python3.9[105269]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  4 05:19:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:49 np0005545273 python3.9[105421]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:19:49 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  4 05:19:49 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  4 05:19:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:50 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec  4 05:19:50 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec  4 05:19:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:51 np0005545273 python3.9[105574]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:19:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec  4 05:19:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec  4 05:19:51 np0005545273 python3.9[105726]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:19:52 np0005545273 python3.9[105804]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:19:52 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec  4 05:19:52 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec  4 05:19:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:52 np0005545273 python3.9[105956]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:19:53 np0005545273 python3.9[106034]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:19:53 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  4 05:19:53 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  4 05:19:54 np0005545273 python3.9[106186]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:19:54 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  4 05:19:54 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  4 05:19:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:56 np0005545273 python3.9[106339]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:19:56 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  4 05:19:56 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  4 05:19:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:57 np0005545273 python3.9[106491]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  4 05:19:57 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec  4 05:19:57 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec  4 05:19:57 np0005545273 python3.9[106641]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:19:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:19:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:19:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:19:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:19:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:19:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:19:58 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec  4 05:19:58 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec  4 05:19:58 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec  4 05:19:58 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec  4 05:19:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:19:58 np0005545273 python3.9[106793]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:19:58 np0005545273 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  4 05:19:59 np0005545273 systemd[1]: tuned.service: Deactivated successfully.
Dec  4 05:19:59 np0005545273 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  4 05:19:59 np0005545273 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  4 05:19:59 np0005545273 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  4 05:19:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec  4 05:19:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec  4 05:19:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:19:59 np0005545273 python3.9[106954]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  4 05:20:00 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec  4 05:20:00 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec  4 05:20:00 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec  4 05:20:00 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec  4 05:20:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  4 05:20:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  4 05:20:02 np0005545273 python3.9[107106]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:20:02 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  4 05:20:02 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  4 05:20:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:02 np0005545273 python3.9[107260]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:20:03 np0005545273 systemd[1]: session-35.scope: Deactivated successfully.
Dec  4 05:20:03 np0005545273 systemd[1]: session-35.scope: Consumed 1min 6.029s CPU time.
Dec  4 05:20:03 np0005545273 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Dec  4 05:20:03 np0005545273 systemd-logind[798]: Removed session 35.
Dec  4 05:20:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec  4 05:20:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec  4 05:20:04 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  4 05:20:04 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  4 05:20:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec  4 05:20:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec  4 05:20:06 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec  4 05:20:06 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec  4 05:20:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:07 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec  4 05:20:07 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec  4 05:20:08 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec  4 05:20:08 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec  4 05:20:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:09 np0005545273 systemd-logind[798]: New session 36 of user zuul.
Dec  4 05:20:09 np0005545273 systemd[1]: Started Session 36 of User zuul.
Dec  4 05:20:09 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  4 05:20:09 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  4 05:20:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:10 np0005545273 python3.9[107440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:20:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:11 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec  4 05:20:11 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec  4 05:20:11 np0005545273 python3.9[107596]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  4 05:20:11 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  4 05:20:11 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  4 05:20:11 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec  4 05:20:11 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:20:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:20:12 np0005545273 python3.9[107830]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:20:12 np0005545273 podman[107898]: 2025-12-04 10:20:12.124610669 +0000 UTC m=+0.021783335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:20:12 np0005545273 podman[107898]: 2025-12-04 10:20:12.23266565 +0000 UTC m=+0.129838296 container create 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:20:12 np0005545273 systemd[1]: Started libpod-conmon-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope.
Dec  4 05:20:12 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:20:12 np0005545273 podman[107898]: 2025-12-04 10:20:12.318428786 +0000 UTC m=+0.215601462 container init 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:20:12 np0005545273 podman[107898]: 2025-12-04 10:20:12.326839929 +0000 UTC m=+0.224012575 container start 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:20:12 np0005545273 sweet_borg[107918]: 167 167
Dec  4 05:20:12 np0005545273 systemd[1]: libpod-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope: Deactivated successfully.
Dec  4 05:20:12 np0005545273 conmon[107918]: conmon 14aea3ac1860fa37c5cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope/container/memory.events
Dec  4 05:20:12 np0005545273 podman[107898]: 2025-12-04 10:20:12.345514305 +0000 UTC m=+0.242686951 container attach 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:20:12 np0005545273 podman[107898]: 2025-12-04 10:20:12.348595563 +0000 UTC m=+0.245768209 container died 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:20:12 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3ddc2746033e647a4b9df2101c0d2fb0c30b90c65e7a5daa0e5701d4b64152dc-merged.mount: Deactivated successfully.
Dec  4 05:20:12 np0005545273 podman[107898]: 2025-12-04 10:20:12.404005288 +0000 UTC m=+0.301177934 container remove 14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:20:12 np0005545273 systemd[1]: libpod-conmon-14aea3ac1860fa37c5cb60d9cee99fc3709e9d2fe6a49dec75c6745a1fbce645.scope: Deactivated successfully.
Dec  4 05:20:12 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec  4 05:20:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:20:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:20:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:20:12 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec  4 05:20:12 np0005545273 podman[107944]: 2025-12-04 10:20:12.555441642 +0000 UTC m=+0.040224055 container create f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:20:12 np0005545273 systemd[1]: Started libpod-conmon-f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589.scope.
Dec  4 05:20:12 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:20:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:12 np0005545273 podman[107944]: 2025-12-04 10:20:12.632167142 +0000 UTC m=+0.116949565 container init f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:20:12 np0005545273 podman[107944]: 2025-12-04 10:20:12.5387775 +0000 UTC m=+0.023559943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:20:12 np0005545273 podman[107944]: 2025-12-04 10:20:12.642110098 +0000 UTC m=+0.126892681 container start f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:20:12 np0005545273 podman[107944]: 2025-12-04 10:20:12.646018933 +0000 UTC m=+0.130801366 container attach f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:20:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:12 np0005545273 python3.9[108041]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  4 05:20:13 np0005545273 mystifying_chatterjee[107984]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:20:13 np0005545273 mystifying_chatterjee[107984]: --> All data devices are unavailable
Dec  4 05:20:13 np0005545273 systemd[1]: libpod-f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589.scope: Deactivated successfully.
Dec  4 05:20:13 np0005545273 podman[107944]: 2025-12-04 10:20:13.146168325 +0000 UTC m=+0.630950748 container died f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:20:13 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e1ccf6c73c283e343df62470c99e82bb91a8adb168d1248d823aa9fed3ef4243-merged.mount: Deactivated successfully.
Dec  4 05:20:13 np0005545273 podman[107944]: 2025-12-04 10:20:13.200872955 +0000 UTC m=+0.685655368 container remove f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:20:13 np0005545273 systemd[1]: libpod-conmon-f492f42ad8cbc70b3c0cb3bb717addd8cb108da84bc9e0e2891d980f74c08589.scope: Deactivated successfully.
Dec  4 05:20:13 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  4 05:20:13 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  4 05:20:13 np0005545273 podman[108132]: 2025-12-04 10:20:13.634150183 +0000 UTC m=+0.037740173 container create c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:20:13 np0005545273 systemd[1]: Started libpod-conmon-c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4.scope.
Dec  4 05:20:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:20:13 np0005545273 podman[108132]: 2025-12-04 10:20:13.706082387 +0000 UTC m=+0.109672397 container init c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:20:13 np0005545273 podman[108132]: 2025-12-04 10:20:13.711800382 +0000 UTC m=+0.115390372 container start c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:20:13 np0005545273 podman[108132]: 2025-12-04 10:20:13.618310147 +0000 UTC m=+0.021900157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:20:13 np0005545273 podman[108132]: 2025-12-04 10:20:13.715192116 +0000 UTC m=+0.118782106 container attach c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:20:13 np0005545273 dreamy_nobel[108149]: 167 167
Dec  4 05:20:13 np0005545273 systemd[1]: libpod-c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4.scope: Deactivated successfully.
Dec  4 05:20:13 np0005545273 podman[108132]: 2025-12-04 10:20:13.717655299 +0000 UTC m=+0.121245289 container died c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:20:13 np0005545273 systemd[1]: var-lib-containers-storage-overlay-98d37b212a4534b3625764d27069f54df78bacb44ff52f393f60541bb3e83ff6-merged.mount: Deactivated successfully.
Dec  4 05:20:13 np0005545273 podman[108132]: 2025-12-04 10:20:13.753948488 +0000 UTC m=+0.157538478 container remove c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_nobel, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:20:13 np0005545273 systemd[1]: libpod-conmon-c710312040c9d243a55a9fd2590acd40dca7a131c35276e23a5ff23bf95179a4.scope: Deactivated successfully.
Dec  4 05:20:13 np0005545273 podman[108173]: 2025-12-04 10:20:13.889770943 +0000 UTC m=+0.037877064 container create 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:20:13 np0005545273 systemd[1]: Started libpod-conmon-1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f.scope.
Dec  4 05:20:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:20:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:13 np0005545273 podman[108173]: 2025-12-04 10:20:13.872636471 +0000 UTC m=+0.020742612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:20:13 np0005545273 podman[108173]: 2025-12-04 10:20:13.976151723 +0000 UTC m=+0.124257864 container init 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:20:13 np0005545273 podman[108173]: 2025-12-04 10:20:13.984448434 +0000 UTC m=+0.132554565 container start 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:20:13 np0005545273 podman[108173]: 2025-12-04 10:20:13.988273297 +0000 UTC m=+0.136379448 container attach 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]: {
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:    "0": [
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:        {
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "devices": [
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "/dev/loop3"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            ],
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_name": "ceph_lv0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_size": "21470642176",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "name": "ceph_lv0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "tags": {
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cluster_name": "ceph",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.crush_device_class": "",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.encrypted": "0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.objectstore": "bluestore",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osd_id": "0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.type": "block",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.vdo": "0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.with_tpm": "0"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            },
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "type": "block",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "vg_name": "ceph_vg0"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:        }
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:    ],
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:    "1": [
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:        {
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "devices": [
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "/dev/loop4"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            ],
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_name": "ceph_lv1",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_size": "21470642176",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "name": "ceph_lv1",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "tags": {
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cluster_name": "ceph",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.crush_device_class": "",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.encrypted": "0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.objectstore": "bluestore",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osd_id": "1",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.type": "block",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.vdo": "0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.with_tpm": "0"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            },
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "type": "block",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "vg_name": "ceph_vg1"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:        }
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:    ],
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:    "2": [
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:        {
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "devices": [
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "/dev/loop5"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            ],
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_name": "ceph_lv2",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_size": "21470642176",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "name": "ceph_lv2",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "tags": {
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.cluster_name": "ceph",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.crush_device_class": "",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.encrypted": "0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.objectstore": "bluestore",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osd_id": "2",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.type": "block",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.vdo": "0",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:                "ceph.with_tpm": "0"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            },
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "type": "block",
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:            "vg_name": "ceph_vg2"
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:        }
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]:    ]
Dec  4 05:20:14 np0005545273 pensive_lamarr[108190]: }
Dec  4 05:20:14 np0005545273 systemd[1]: libpod-1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f.scope: Deactivated successfully.
Dec  4 05:20:14 np0005545273 podman[108173]: 2025-12-04 10:20:14.305954469 +0000 UTC m=+0.454060600 container died 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:20:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b9a7ae8743db11d682d68126fc4af91c470fa5ad7efb5352ef6809d9a2328d44-merged.mount: Deactivated successfully.
Dec  4 05:20:14 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec  4 05:20:14 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec  4 05:20:14 np0005545273 podman[108173]: 2025-12-04 10:20:14.353014393 +0000 UTC m=+0.501120534 container remove 1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:20:14 np0005545273 systemd[1]: libpod-conmon-1043ee96d89b0016126fb9642212c7251a4e8d84d99d2410a3fdab87d365069f.scope: Deactivated successfully.
Dec  4 05:20:14 np0005545273 podman[108396]: 2025-12-04 10:20:14.791296708 +0000 UTC m=+0.051612204 container create 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:20:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:14 np0005545273 systemd[1]: Started libpod-conmon-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope.
Dec  4 05:20:14 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:20:14 np0005545273 podman[108396]: 2025-12-04 10:20:14.772229474 +0000 UTC m=+0.032544990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:20:14 np0005545273 podman[108396]: 2025-12-04 10:20:14.867804553 +0000 UTC m=+0.128120069 container init 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  4 05:20:14 np0005545273 podman[108396]: 2025-12-04 10:20:14.876239197 +0000 UTC m=+0.136554693 container start 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:20:14 np0005545273 podman[108396]: 2025-12-04 10:20:14.879829644 +0000 UTC m=+0.140145140 container attach 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:20:14 np0005545273 upbeat_beaver[108441]: 167 167
Dec  4 05:20:14 np0005545273 systemd[1]: libpod-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope: Deactivated successfully.
Dec  4 05:20:14 np0005545273 conmon[108441]: conmon 6125472c58092f14a863 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope/container/memory.events
Dec  4 05:20:14 np0005545273 podman[108396]: 2025-12-04 10:20:14.884616109 +0000 UTC m=+0.144931625 container died 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:20:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fe1a2f56fbfbf9ec39cecab50548d1eebb3d066a4100692ebc4906578ae04d54-merged.mount: Deactivated successfully.
Dec  4 05:20:14 np0005545273 podman[108396]: 2025-12-04 10:20:14.932039341 +0000 UTC m=+0.192354847 container remove 6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:20:14 np0005545273 systemd[1]: libpod-conmon-6125472c58092f14a863973fcfd8b30ed7ec9ecea2f25aac39b478fa030c0ec3.scope: Deactivated successfully.
Dec  4 05:20:15 np0005545273 python3.9[108438]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:20:15 np0005545273 podman[108465]: 2025-12-04 10:20:15.076395102 +0000 UTC m=+0.034699257 container create 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:20:15 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec  4 05:20:15 np0005545273 systemd[1]: Started libpod-conmon-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope.
Dec  4 05:20:15 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec  4 05:20:15 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:20:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:20:15 np0005545273 podman[108465]: 2025-12-04 10:20:15.060429994 +0000 UTC m=+0.018734169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:20:15 np0005545273 podman[108465]: 2025-12-04 10:20:15.164066888 +0000 UTC m=+0.122371053 container init 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:20:15 np0005545273 podman[108465]: 2025-12-04 10:20:15.170599911 +0000 UTC m=+0.128904066 container start 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:20:15 np0005545273 podman[108465]: 2025-12-04 10:20:15.173664947 +0000 UTC m=+0.131969122 container attach 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:20:15 np0005545273 lvm[108561]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:20:15 np0005545273 lvm[108562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:20:15 np0005545273 lvm[108562]: VG ceph_vg1 finished
Dec  4 05:20:15 np0005545273 lvm[108561]: VG ceph_vg0 finished
Dec  4 05:20:15 np0005545273 lvm[108564]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:20:15 np0005545273 lvm[108564]: VG ceph_vg2 finished
Dec  4 05:20:15 np0005545273 thirsty_wescoff[108483]: {}
Dec  4 05:20:15 np0005545273 systemd[1]: libpod-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope: Deactivated successfully.
Dec  4 05:20:15 np0005545273 systemd[1]: libpod-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope: Consumed 1.362s CPU time.
Dec  4 05:20:15 np0005545273 podman[108465]: 2025-12-04 10:20:15.964458343 +0000 UTC m=+0.922762508 container died 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:20:16 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec  4 05:20:16 np0005545273 systemd[1]: var-lib-containers-storage-overlay-18f5bfe76b97d04364e4543c43512194f6634587a626f991197d6b858fa52cd3-merged.mount: Deactivated successfully.
Dec  4 05:20:16 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec  4 05:20:16 np0005545273 podman[108465]: 2025-12-04 10:20:16.114570889 +0000 UTC m=+1.072875044 container remove 6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:20:16 np0005545273 systemd[1]: libpod-conmon-6301a5f1ccc63044a7c1ec3440804bbf4304d191e7bc992c5bcfab93400357cd.scope: Deactivated successfully.
Dec  4 05:20:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:20:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:20:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:20:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:20:16 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec  4 05:20:16 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec  4 05:20:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:20:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:20:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:17 np0005545273 python3.9[108756]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:20:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec  4 05:20:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec  4 05:20:18 np0005545273 python3.9[108910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:20:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:19 np0005545273 python3.9[109062]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  4 05:20:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:20 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec  4 05:20:20 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec  4 05:20:20 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec  4 05:20:20 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec  4 05:20:20 np0005545273 python3.9[109212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:20:20 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec  4 05:20:20 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec  4 05:20:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:21 np0005545273 python3.9[109370]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:20:21 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec  4 05:20:21 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec  4 05:20:22 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec  4 05:20:22 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec  4 05:20:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:23 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec  4 05:20:23 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec  4 05:20:23 np0005545273 python3.9[109523]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:20:23 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec  4 05:20:23 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec  4 05:20:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec  4 05:20:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec  4 05:20:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:24 np0005545273 python3.9[109810]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  4 05:20:25 np0005545273 python3.9[109960]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:20:26 np0005545273 python3.9[110114]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:20:26 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec  4 05:20:26 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec  4 05:20:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:20:26
Dec  4 05:20:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:20:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:20:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'backups', 'default.rgw.control']
Dec  4 05:20:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:20:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:20:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:20:28 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec  4 05:20:28 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec  4 05:20:28 np0005545273 python3.9[110269]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:20:28 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec  4 05:20:28 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec  4 05:20:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec  4 05:20:29 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec  4 05:20:29 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec  4 05:20:29 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec  4 05:20:29 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec  4 05:20:29 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec  4 05:20:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec  4 05:20:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec  4 05:20:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec  4 05:20:30 np0005545273 python3.9[110422]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:20:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec  4 05:20:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:31 np0005545273 python3.9[110576]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec  4 05:20:32 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec  4 05:20:32 np0005545273 systemd[1]: session-36.scope: Deactivated successfully.
Dec  4 05:20:32 np0005545273 systemd[1]: session-36.scope: Consumed 17.989s CPU time.
Dec  4 05:20:32 np0005545273 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Dec  4 05:20:32 np0005545273 systemd-logind[798]: Removed session 36.
Dec  4 05:20:32 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec  4 05:20:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:33 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec  4 05:20:33 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec  4 05:20:34 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec  4 05:20:34 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec  4 05:20:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:35 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec  4 05:20:35 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec  4 05:20:36 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec  4 05:20:36 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec  4 05:20:36 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec  4 05:20:36 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:20:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:20:37 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec  4 05:20:37 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec  4 05:20:37 np0005545273 systemd-logind[798]: New session 37 of user zuul.
Dec  4 05:20:37 np0005545273 systemd[1]: Started Session 37 of User zuul.
Dec  4 05:20:38 np0005545273 python3.9[110756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:20:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec  4 05:20:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec  4 05:20:39 np0005545273 python3.9[110910]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:20:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:40 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec  4 05:20:40 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec  4 05:20:40 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec  4 05:20:40 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec  4 05:20:40 np0005545273 python3.9[111103]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:20:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:41 np0005545273 systemd[1]: session-37.scope: Deactivated successfully.
Dec  4 05:20:41 np0005545273 systemd[1]: session-37.scope: Consumed 2.221s CPU time.
Dec  4 05:20:41 np0005545273 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Dec  4 05:20:41 np0005545273 systemd-logind[798]: Removed session 37.
Dec  4 05:20:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec  4 05:20:41 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec  4 05:20:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec  4 05:20:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec  4 05:20:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:45 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec  4 05:20:45 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec  4 05:20:45 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec  4 05:20:45 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec  4 05:20:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:48 np0005545273 systemd-logind[798]: New session 38 of user zuul.
Dec  4 05:20:48 np0005545273 systemd[1]: Started Session 38 of User zuul.
Dec  4 05:20:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:49 np0005545273 python3.9[111282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:20:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:49 np0005545273 python3.9[111436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:20:50 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec  4 05:20:50 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec  4 05:20:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:50 np0005545273 python3.9[111592]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:20:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec  4 05:20:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec  4 05:20:51 np0005545273 python3.9[111678]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:20:52 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec  4 05:20:52 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec  4 05:20:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:53 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec  4 05:20:53 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec  4 05:20:54 np0005545273 python3.9[111831]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:20:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:20:55 np0005545273 python3.9[112026]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:20:55 np0005545273 python3.9[112178]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:20:56 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec  4 05:20:56 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec  4 05:20:56 np0005545273 python3.9[112343]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:20:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:57 np0005545273 python3.9[112421]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:20:57 np0005545273 python3.9[112575]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:20:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:20:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:20:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:20:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:20:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:20:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:20:58 np0005545273 python3.9[112653]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:20:58 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec  4 05:20:58 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec  4 05:20:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:20:59 np0005545273 python3.9[112805]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:20:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec  4 05:20:59 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec  4 05:20:59 np0005545273 python3.9[112958]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:20:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:00 np0005545273 python3.9[113111]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:21:00 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec  4 05:21:00 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.423427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660423565, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7246, "num_deletes": 251, "total_data_size": 9406135, "memory_usage": 9577264, "flush_reason": "Manual Compaction"}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660501255, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7481196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7389, "table_properties": {"data_size": 7454574, "index_size": 17291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 76305, "raw_average_key_size": 23, "raw_value_size": 7391644, "raw_average_value_size": 2250, "num_data_blocks": 759, "num_entries": 3284, "num_filter_entries": 3284, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843246, "oldest_key_time": 1764843246, "file_creation_time": 1764843660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 77996 microseconds, and 15613 cpu microseconds.
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.501438) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7481196 bytes OK
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.501530) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.503605) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.503626) EVENT_LOG_v1 {"time_micros": 1764843660503620, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.503666) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9374482, prev total WAL file size 9374482, number of live WAL files 2.
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.506214) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7305KB) 13(58KB) 8(1944B)]
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660506339, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7543100, "oldest_snapshot_seqno": -1}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3110 keys, 7495942 bytes, temperature: kUnknown
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660626038, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7495942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7469740, "index_size": 17324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74745, "raw_average_key_size": 24, "raw_value_size": 7408164, "raw_average_value_size": 2382, "num_data_blocks": 762, "num_entries": 3110, "num_filter_entries": 3110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764843660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.626500) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7495942 bytes
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.628615) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.9 rd, 62.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.2, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3399, records dropped: 289 output_compression: NoCompression
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.628642) EVENT_LOG_v1 {"time_micros": 1764843660628629, "job": 4, "event": "compaction_finished", "compaction_time_micros": 119845, "compaction_time_cpu_micros": 17932, "output_level": 6, "num_output_files": 1, "total_output_size": 7495942, "num_input_records": 3399, "num_output_records": 3110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660630652, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660630725, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843660630817, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  4 05:21:00 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:21:00.506019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:21:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:00 np0005545273 python3.9[113264]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:21:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec  4 05:21:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec  4 05:21:01 np0005545273 python3.9[113416]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:21:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:03 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec  4 05:21:03 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec  4 05:21:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec  4 05:21:03 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec  4 05:21:03 np0005545273 python3.9[113572]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:21:04 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec  4 05:21:04 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec  4 05:21:04 np0005545273 python3.9[113727]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:21:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec  4 05:21:04 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec  4 05:21:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:05 np0005545273 python3.9[113879]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:21:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec  4 05:21:05 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec  4 05:21:06 np0005545273 python3.9[114031]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:21:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:07 np0005545273 python3.9[114184]: ansible-service_facts Invoked
Dec  4 05:21:07 np0005545273 network[114201]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:21:07 np0005545273 network[114202]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:21:07 np0005545273 network[114203]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:21:08 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec  4 05:21:08 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec  4 05:21:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec  4 05:21:10 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec  4 05:21:10 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec  4 05:21:10 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec  4 05:21:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:11 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec  4 05:21:11 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec  4 05:21:11 np0005545273 python3.9[114655]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:21:12 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec  4 05:21:12 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec  4 05:21:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:13 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec  4 05:21:13 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec  4 05:21:14 np0005545273 python3.9[114808]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  4 05:21:14 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec  4 05:21:14 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec  4 05:21:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:15 np0005545273 python3.9[114962]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:15 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec  4 05:21:15 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec  4 05:21:15 np0005545273 python3.9[115040]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:16 np0005545273 python3.9[115242]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:21:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:21:17 np0005545273 python3.9[115381]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:17 np0005545273 podman[115427]: 2025-12-04 10:21:17.314773502 +0000 UTC m=+0.058278685 container create 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:21:17 np0005545273 systemd[1]: Started libpod-conmon-4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5.scope.
Dec  4 05:21:17 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:21:17 np0005545273 podman[115427]: 2025-12-04 10:21:17.292834813 +0000 UTC m=+0.036339986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:21:17 np0005545273 podman[115427]: 2025-12-04 10:21:17.399146783 +0000 UTC m=+0.142651986 container init 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:21:17 np0005545273 podman[115427]: 2025-12-04 10:21:17.40763197 +0000 UTC m=+0.151137143 container start 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:21:17 np0005545273 podman[115427]: 2025-12-04 10:21:17.411688534 +0000 UTC m=+0.155193697 container attach 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:21:17 np0005545273 priceless_mclaren[115456]: 167 167
Dec  4 05:21:17 np0005545273 systemd[1]: libpod-4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5.scope: Deactivated successfully.
Dec  4 05:21:17 np0005545273 podman[115427]: 2025-12-04 10:21:17.415448531 +0000 UTC m=+0.158953694 container died 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:21:17 np0005545273 systemd[1]: var-lib-containers-storage-overlay-05d2ac38f4a267929b91a0fc2a7c22c2a720e993cefc3f96e56bfd3a5138cc0e-merged.mount: Deactivated successfully.
Dec  4 05:21:17 np0005545273 podman[115427]: 2025-12-04 10:21:17.462844843 +0000 UTC m=+0.206350006 container remove 4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_mclaren, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:21:17 np0005545273 systemd[1]: libpod-conmon-4b50fa4140bef65894050d8409ad1d87723c7e8550b7bc375474ecbeb82450b5.scope: Deactivated successfully.
Dec  4 05:21:17 np0005545273 podman[115479]: 2025-12-04 10:21:17.609748246 +0000 UTC m=+0.048082818 container create 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:21:17 np0005545273 systemd[1]: Started libpod-conmon-7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054.scope.
Dec  4 05:21:17 np0005545273 podman[115479]: 2025-12-04 10:21:17.585221467 +0000 UTC m=+0.023556059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:21:17 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:21:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:17 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:17 np0005545273 podman[115479]: 2025-12-04 10:21:17.71751879 +0000 UTC m=+0.155853402 container init 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  4 05:21:17 np0005545273 podman[115479]: 2025-12-04 10:21:17.723603241 +0000 UTC m=+0.161937833 container start 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:21:17 np0005545273 podman[115479]: 2025-12-04 10:21:17.728013195 +0000 UTC m=+0.166347787 container attach 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:21:18 np0005545273 eager_ellis[115519]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:21:18 np0005545273 eager_ellis[115519]: --> All data devices are unavailable
Dec  4 05:21:18 np0005545273 systemd[1]: libpod-7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054.scope: Deactivated successfully.
Dec  4 05:21:18 np0005545273 podman[115479]: 2025-12-04 10:21:18.259006273 +0000 UTC m=+0.697340855 container died 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:21:18 np0005545273 python3.9[115635]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:18 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bb3b56728ad73e0c759db65b3d547b283870a0a8e93ef7fa3e52c560f4064d6e-merged.mount: Deactivated successfully.
Dec  4 05:21:18 np0005545273 podman[115479]: 2025-12-04 10:21:18.304948671 +0000 UTC m=+0.743283223 container remove 7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:21:18 np0005545273 systemd[1]: libpod-conmon-7761ed459ef5a8d189d7a338d27a580d7b89daefd607c1af8caa453382a26054.scope: Deactivated successfully.
Dec  4 05:21:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec  4 05:21:18 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec  4 05:21:18 np0005545273 podman[115744]: 2025-12-04 10:21:18.725317258 +0000 UTC m=+0.041752761 container create 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:21:18 np0005545273 systemd[1]: Started libpod-conmon-2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c.scope.
Dec  4 05:21:18 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:21:18 np0005545273 podman[115744]: 2025-12-04 10:21:18.705457777 +0000 UTC m=+0.021893330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:21:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:19 np0005545273 python3.9[115891]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:21:19 np0005545273 podman[115744]: 2025-12-04 10:21:19.6644148 +0000 UTC m=+0.980850333 container init 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:21:19 np0005545273 podman[115744]: 2025-12-04 10:21:19.676878598 +0000 UTC m=+0.993314101 container start 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:21:19 np0005545273 podman[115744]: 2025-12-04 10:21:19.682984681 +0000 UTC m=+0.999420204 container attach 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:21:19 np0005545273 beautiful_cray[115761]: 167 167
Dec  4 05:21:19 np0005545273 systemd[1]: libpod-2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c.scope: Deactivated successfully.
Dec  4 05:21:19 np0005545273 podman[115744]: 2025-12-04 10:21:19.685458179 +0000 UTC m=+1.001893712 container died 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:21:19 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f3b6c9eac27a5de9d5912741c69e630705b8ab6b9136647247ff63c028db7875-merged.mount: Deactivated successfully.
Dec  4 05:21:19 np0005545273 podman[115744]: 2025-12-04 10:21:19.722281434 +0000 UTC m=+1.038716937 container remove 2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:21:19 np0005545273 systemd[1]: libpod-conmon-2211b16ff3e6e268705a351221ec86c3d112653dc3ce880be66d5b9f041f2a7c.scope: Deactivated successfully.
Dec  4 05:21:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:19 np0005545273 podman[115920]: 2025-12-04 10:21:19.870017826 +0000 UTC m=+0.048088988 container create 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:21:19 np0005545273 systemd[1]: Started libpod-conmon-40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05.scope.
Dec  4 05:21:19 np0005545273 podman[115920]: 2025-12-04 10:21:19.846011619 +0000 UTC m=+0.024082841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:21:19 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:21:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:19 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:19 np0005545273 podman[115920]: 2025-12-04 10:21:19.979125842 +0000 UTC m=+0.157197024 container init 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:21:19 np0005545273 podman[115920]: 2025-12-04 10:21:19.986463563 +0000 UTC m=+0.164534725 container start 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:21:19 np0005545273 podman[115920]: 2025-12-04 10:21:19.990271481 +0000 UTC m=+0.168342643 container attach 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:21:20 np0005545273 reverent_napier[115936]: {
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:    "0": [
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:        {
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "devices": [
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "/dev/loop3"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            ],
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_name": "ceph_lv0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_size": "21470642176",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "name": "ceph_lv0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "tags": {
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cluster_name": "ceph",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.crush_device_class": "",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.encrypted": "0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.objectstore": "bluestore",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osd_id": "0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.type": "block",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.vdo": "0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.with_tpm": "0"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            },
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "type": "block",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "vg_name": "ceph_vg0"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:        }
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:    ],
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:    "1": [
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:        {
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "devices": [
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "/dev/loop4"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            ],
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_name": "ceph_lv1",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_size": "21470642176",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "name": "ceph_lv1",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "tags": {
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cluster_name": "ceph",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.crush_device_class": "",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.encrypted": "0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.objectstore": "bluestore",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osd_id": "1",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.type": "block",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.vdo": "0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.with_tpm": "0"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            },
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "type": "block",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "vg_name": "ceph_vg1"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:        }
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:    ],
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:    "2": [
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:        {
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "devices": [
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "/dev/loop5"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            ],
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_name": "ceph_lv2",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_size": "21470642176",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "name": "ceph_lv2",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "tags": {
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.cluster_name": "ceph",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.crush_device_class": "",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.encrypted": "0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.objectstore": "bluestore",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osd_id": "2",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.type": "block",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.vdo": "0",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:                "ceph.with_tpm": "0"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            },
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "type": "block",
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:            "vg_name": "ceph_vg2"
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:        }
Dec  4 05:21:20 np0005545273 reverent_napier[115936]:    ]
Dec  4 05:21:20 np0005545273 reverent_napier[115936]: }
Dec  4 05:21:20 np0005545273 systemd[1]: libpod-40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05.scope: Deactivated successfully.
Dec  4 05:21:20 np0005545273 podman[115920]: 2025-12-04 10:21:20.330472906 +0000 UTC m=+0.508544098 container died 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:21:20 np0005545273 systemd[1]: var-lib-containers-storage-overlay-084e77081ef74721b855973091bd09e831eb91b038dc83ca1e9bb881ae86660c-merged.mount: Deactivated successfully.
Dec  4 05:21:20 np0005545273 podman[115920]: 2025-12-04 10:21:20.386489247 +0000 UTC m=+0.564560419 container remove 40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_napier, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:21:20 np0005545273 systemd[1]: libpod-conmon-40cc0981ad76e32724742ce24ea5389f91286a41918798b92b24302a677b1a05.scope: Deactivated successfully.
Dec  4 05:21:20 np0005545273 python3.9[116020]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:21:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:20 np0005545273 podman[116122]: 2025-12-04 10:21:20.852082276 +0000 UTC m=+0.040593544 container create 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  4 05:21:20 np0005545273 systemd[1]: Started libpod-conmon-2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986.scope.
Dec  4 05:21:20 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:21:20 np0005545273 podman[116122]: 2025-12-04 10:21:20.919788459 +0000 UTC m=+0.108299747 container init 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:21:20 np0005545273 podman[116122]: 2025-12-04 10:21:20.927045738 +0000 UTC m=+0.115557006 container start 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:21:20 np0005545273 podman[116122]: 2025-12-04 10:21:20.930453237 +0000 UTC m=+0.118964515 container attach 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:21:20 np0005545273 podman[116122]: 2025-12-04 10:21:20.836043953 +0000 UTC m=+0.024555241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:21:20 np0005545273 vigilant_jones[116138]: 167 167
Dec  4 05:21:20 np0005545273 systemd[1]: libpod-2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986.scope: Deactivated successfully.
Dec  4 05:21:20 np0005545273 podman[116122]: 2025-12-04 10:21:20.934221164 +0000 UTC m=+0.122732432 container died 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:21:20 np0005545273 systemd[1]: var-lib-containers-storage-overlay-25166b00080af2e0cb5a66fe058f393799c356dc44b84bcb1d639f7fe177b03e-merged.mount: Deactivated successfully.
Dec  4 05:21:20 np0005545273 podman[116122]: 2025-12-04 10:21:20.969337391 +0000 UTC m=+0.157848659 container remove 2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:21:20 np0005545273 systemd[1]: libpod-conmon-2e0b3ad77167e0089186c145d968b3095c9d08f2e1b724c11fcd02a5fd4a7986.scope: Deactivated successfully.
Dec  4 05:21:21 np0005545273 podman[116163]: 2025-12-04 10:21:21.180306583 +0000 UTC m=+0.066598368 container create bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:21:21 np0005545273 systemd[1]: Started libpod-conmon-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope.
Dec  4 05:21:21 np0005545273 podman[116163]: 2025-12-04 10:21:21.143876616 +0000 UTC m=+0.030168461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:21:21 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:21:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:21:21 np0005545273 podman[116163]: 2025-12-04 10:21:21.280481521 +0000 UTC m=+0.166773276 container init bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:21:21 np0005545273 podman[116163]: 2025-12-04 10:21:21.297352563 +0000 UTC m=+0.183644318 container start bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:21:21 np0005545273 podman[116163]: 2025-12-04 10:21:21.301351006 +0000 UTC m=+0.187642761 container attach bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:21:21 np0005545273 systemd[1]: session-38.scope: Deactivated successfully.
Dec  4 05:21:21 np0005545273 systemd[1]: session-38.scope: Consumed 23.189s CPU time.
Dec  4 05:21:21 np0005545273 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Dec  4 05:21:21 np0005545273 systemd-logind[798]: Removed session 38.
Dec  4 05:21:21 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec  4 05:21:21 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec  4 05:21:21 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec  4 05:21:21 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec  4 05:21:22 np0005545273 lvm[116258]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:21:22 np0005545273 lvm[116259]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:21:22 np0005545273 lvm[116259]: VG ceph_vg1 finished
Dec  4 05:21:22 np0005545273 lvm[116258]: VG ceph_vg0 finished
Dec  4 05:21:22 np0005545273 lvm[116261]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:21:22 np0005545273 lvm[116261]: VG ceph_vg2 finished
Dec  4 05:21:22 np0005545273 kind_faraday[116180]: {}
Dec  4 05:21:22 np0005545273 systemd[1]: libpod-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope: Deactivated successfully.
Dec  4 05:21:22 np0005545273 systemd[1]: libpod-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope: Consumed 1.499s CPU time.
Dec  4 05:21:22 np0005545273 podman[116163]: 2025-12-04 10:21:22.200961629 +0000 UTC m=+1.087253404 container died bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:21:22 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d7171314a4e21ae02195851686f97f8b2f355bff4bb83ebfe3e3eb841f797c0e-merged.mount: Deactivated successfully.
Dec  4 05:21:22 np0005545273 podman[116163]: 2025-12-04 10:21:22.248889522 +0000 UTC m=+1.135181277 container remove bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:21:22 np0005545273 systemd[1]: libpod-conmon-bcd64076a28c32967a97f78283cc2717b9df8e4d48b421160989723acafa31c4.scope: Deactivated successfully.
Dec  4 05:21:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:21:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:21:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:21:22 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec  4 05:21:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:21:22 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec  4 05:21:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:21:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:21:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec  4 05:21:24 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec  4 05:21:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:25 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec  4 05:21:25 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec  4 05:21:26 np0005545273 systemd-logind[798]: New session 39 of user zuul.
Dec  4 05:21:26 np0005545273 systemd[1]: Started Session 39 of User zuul.
Dec  4 05:21:26 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec  4 05:21:26 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec  4 05:21:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:21:26
Dec  4 05:21:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:21:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:21:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Dec  4 05:21:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:21:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:27 np0005545273 python3.9[116454]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:21:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:21:28 np0005545273 python3.9[116606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:28 np0005545273 python3.9[116684]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:28 np0005545273 systemd[1]: session-39.scope: Deactivated successfully.
Dec  4 05:21:28 np0005545273 systemd[1]: session-39.scope: Consumed 1.497s CPU time.
Dec  4 05:21:28 np0005545273 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Dec  4 05:21:28 np0005545273 systemd-logind[798]: Removed session 39.
Dec  4 05:21:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec  4 05:21:30 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec  4 05:21:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec  4 05:21:30 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec  4 05:21:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:31 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec  4 05:21:31 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec  4 05:21:32 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec  4 05:21:32 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec  4 05:21:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:33 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec  4 05:21:33 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec  4 05:21:34 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec  4 05:21:34 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec  4 05:21:34 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec  4 05:21:34 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec  4 05:21:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:34 np0005545273 systemd-logind[798]: New session 40 of user zuul.
Dec  4 05:21:34 np0005545273 systemd[1]: Started Session 40 of User zuul.
Dec  4 05:21:35 np0005545273 python3.9[116864]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:21:36 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec  4 05:21:36 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec  4 05:21:36 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec  4 05:21:36 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:21:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:21:37 np0005545273 python3.9[117020]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:37 np0005545273 python3.9[117195]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:38 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec  4 05:21:38 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec  4 05:21:38 np0005545273 python3.9[117273]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.uzm0uy36 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec  4 05:21:39 np0005545273 python3.9[117425]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:39 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec  4 05:21:39 np0005545273 python3.9[117503]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.jja9nxld recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:40 np0005545273 python3.9[117655]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:21:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec  4 05:21:40 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec  4 05:21:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:41 np0005545273 python3.9[117807]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:41 np0005545273 python3.9[117885]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:21:42 np0005545273 python3.9[118037]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec  4 05:21:42 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec  4 05:21:42 np0005545273 python3.9[118115]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:21:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec  4 05:21:43 np0005545273 python3.9[118267]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:43 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec  4 05:21:43 np0005545273 python3.9[118419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec  4 05:21:44 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec  4 05:21:44 np0005545273 python3.9[118497]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:44 np0005545273 python3.9[118649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:45 np0005545273 python3.9[118727]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:46 np0005545273 python3.9[118879]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:21:46 np0005545273 systemd[1]: Reloading.
Dec  4 05:21:46 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:21:46 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:21:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:47 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec  4 05:21:47 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec  4 05:21:47 np0005545273 python3.9[119068]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:47 np0005545273 python3.9[119146]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:48 np0005545273 python3.9[119298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:48 np0005545273 python3.9[119376]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:49 np0005545273 python3.9[119530]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:21:49 np0005545273 systemd[1]: Reloading.
Dec  4 05:21:49 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:21:49 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:21:49 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec  4 05:21:49 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec  4 05:21:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:49 np0005545273 systemd[1]: Starting Create netns directory...
Dec  4 05:21:49 np0005545273 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  4 05:21:49 np0005545273 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  4 05:21:49 np0005545273 systemd[1]: Finished Create netns directory.
Dec  4 05:21:50 np0005545273 python3.9[119722]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:21:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:50 np0005545273 network[119739]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:21:50 np0005545273 network[119740]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:21:50 np0005545273 network[119741]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:21:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec  4 05:21:51 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec  4 05:21:52 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec  4 05:21:52 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec  4 05:21:52 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec  4 05:21:52 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec  4 05:21:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:54 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec  4 05:21:54 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec  4 05:21:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:21:55 np0005545273 python3.9[120005]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:55 np0005545273 python3.9[120083]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:56 np0005545273 python3.9[120235]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:56 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec  4 05:21:56 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec  4 05:21:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:57 np0005545273 python3.9[120387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:21:57 np0005545273 python3.9[120465]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:21:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:21:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:21:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:21:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:21:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:21:58 np0005545273 python3.9[120619]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  4 05:21:58 np0005545273 systemd[1]: Starting Time & Date Service...
Dec  4 05:21:58 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec  4 05:21:58 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec  4 05:21:58 np0005545273 systemd[1]: Started Time & Date Service.
Dec  4 05:21:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:21:59 np0005545273 python3.9[120775]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:21:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:00 np0005545273 python3.9[120927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:00 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec  4 05:22:00 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec  4 05:22:00 np0005545273 python3.9[121005]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:01 np0005545273 python3.9[121157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:01 np0005545273 python3.9[121235]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6dm2prbk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:02 np0005545273 python3.9[121387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:02 np0005545273 python3.9[121465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:03 np0005545273 python3.9[121617]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:22:04 np0005545273 python3[121770]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 05:22:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:05 np0005545273 python3.9[121922]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:05 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec  4 05:22:05 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec  4 05:22:06 np0005545273 python3.9[122000]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:06 np0005545273 python3.9[122152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:07 np0005545273 python3.9[122230]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:07 np0005545273 python3.9[122382]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:08 np0005545273 python3.9[122460]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:08 np0005545273 python3.9[122612]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:09 np0005545273 python3.9[122690]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:10 np0005545273 python3.9[122842]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:10 np0005545273 python3.9[122920]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:11 np0005545273 python3.9[123072]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:22:12 np0005545273 python3.9[123227]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:12 np0005545273 python3.9[123379]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:13 np0005545273 python3.9[123531]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:14 np0005545273 python3.9[123683]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  4 05:22:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:15 np0005545273 python3.9[123835]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  4 05:22:15 np0005545273 systemd[1]: session-40.scope: Deactivated successfully.
Dec  4 05:22:15 np0005545273 systemd[1]: session-40.scope: Consumed 29.576s CPU time.
Dec  4 05:22:15 np0005545273 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Dec  4 05:22:15 np0005545273 systemd-logind[798]: Removed session 40.
Dec  4 05:22:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:17 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec  4 05:22:17 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec  4 05:22:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:19 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec  4 05:22:19 np0005545273 ceph-osd[86021]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec  4 05:22:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:20 np0005545273 systemd-logind[798]: New session 41 of user zuul.
Dec  4 05:22:20 np0005545273 systemd[1]: Started Session 41 of User zuul.
Dec  4 05:22:21 np0005545273 python3.9[124015]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  4 05:22:22 np0005545273 python3.9[124167]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:22:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:22:23 np0005545273 python3.9[124403]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  4 05:22:23 np0005545273 podman[124534]: 2025-12-04 10:22:23.444551028 +0000 UTC m=+0.039468045 container create 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:22:23 np0005545273 systemd[1]: Started libpod-conmon-4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0.scope.
Dec  4 05:22:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:22:23 np0005545273 podman[124534]: 2025-12-04 10:22:23.519635782 +0000 UTC m=+0.114552829 container init 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:22:23 np0005545273 podman[124534]: 2025-12-04 10:22:23.424927839 +0000 UTC m=+0.019844886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:22:23 np0005545273 podman[124534]: 2025-12-04 10:22:23.526321049 +0000 UTC m=+0.121238066 container start 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:22:23 np0005545273 podman[124534]: 2025-12-04 10:22:23.529210651 +0000 UTC m=+0.124127668 container attach 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  4 05:22:23 np0005545273 condescending_mirzakhani[124579]: 167 167
Dec  4 05:22:23 np0005545273 systemd[1]: libpod-4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0.scope: Deactivated successfully.
Dec  4 05:22:23 np0005545273 podman[124534]: 2025-12-04 10:22:23.532891962 +0000 UTC m=+0.127808979 container died 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:22:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ed47030be6afc3810741e9198233cfebcebc612601eb9ae88719a5aec5613381-merged.mount: Deactivated successfully.
Dec  4 05:22:23 np0005545273 podman[124534]: 2025-12-04 10:22:23.588663014 +0000 UTC m=+0.183580031 container remove 4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:22:23 np0005545273 systemd[1]: libpod-conmon-4808c8b8abf14510f5f0b9854cd351605f947822d968efbcaa1453c19c8eb9d0.scope: Deactivated successfully.
Dec  4 05:22:23 np0005545273 podman[124658]: 2025-12-04 10:22:23.778291616 +0000 UTC m=+0.072109501 container create aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:22:23 np0005545273 python3.9[124652]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.h5jwro0g follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:22:23 np0005545273 systemd[1]: Started libpod-conmon-aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978.scope.
Dec  4 05:22:23 np0005545273 podman[124658]: 2025-12-04 10:22:23.730896993 +0000 UTC m=+0.024714898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:22:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:22:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:23 np0005545273 podman[124658]: 2025-12-04 10:22:23.871402248 +0000 UTC m=+0.165220163 container init aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:22:23 np0005545273 podman[124658]: 2025-12-04 10:22:23.881918731 +0000 UTC m=+0.175736616 container start aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:22:23 np0005545273 podman[124658]: 2025-12-04 10:22:23.892132386 +0000 UTC m=+0.185950301 container attach aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:22:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:22:24 np0005545273 quirky_galileo[124676]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:22:24 np0005545273 quirky_galileo[124676]: --> All data devices are unavailable
Dec  4 05:22:24 np0005545273 systemd[1]: libpod-aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978.scope: Deactivated successfully.
Dec  4 05:22:24 np0005545273 podman[124658]: 2025-12-04 10:22:24.353214449 +0000 UTC m=+0.647032334 container died aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:22:24 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d4ef8008ba3162410f9c3ea3f52a4728eca1f9cc66423f0377779d6a8e2338da-merged.mount: Deactivated successfully.
Dec  4 05:22:24 np0005545273 podman[124658]: 2025-12-04 10:22:24.436735954 +0000 UTC m=+0.730553839 container remove aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:22:24 np0005545273 systemd[1]: libpod-conmon-aec2c9dc454526f6b22bfcb934aed31033e07844ac3c89adef3fe907f5369978.scope: Deactivated successfully.
Dec  4 05:22:24 np0005545273 python3.9[124815]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.h5jwro0g mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843743.3415232-44-243326432585835/.source.h5jwro0g _original_basename=.lwcmjfx6 follow=False checksum=10f9bff719ccd38a8a0d0cdbb472b912e28b2576 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:24 np0005545273 podman[124971]: 2025-12-04 10:22:24.890664799 +0000 UTC m=+0.039887426 container create 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:22:24 np0005545273 systemd[1]: Started libpod-conmon-04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77.scope.
Dec  4 05:22:24 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:22:24 np0005545273 podman[124971]: 2025-12-04 10:22:24.873680965 +0000 UTC m=+0.022903592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:22:24 np0005545273 podman[124971]: 2025-12-04 10:22:24.973471455 +0000 UTC m=+0.122694092 container init 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:22:24 np0005545273 podman[124971]: 2025-12-04 10:22:24.981798163 +0000 UTC m=+0.131020780 container start 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:22:24 np0005545273 podman[124971]: 2025-12-04 10:22:24.985254349 +0000 UTC m=+0.134476986 container attach 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:22:24 np0005545273 inspiring_carver[124987]: 167 167
Dec  4 05:22:24 np0005545273 systemd[1]: libpod-04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77.scope: Deactivated successfully.
Dec  4 05:22:24 np0005545273 podman[124971]: 2025-12-04 10:22:24.988400778 +0000 UTC m=+0.137623405 container died 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:22:25 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6b74a20630c1cb587188436e485f8d656fd427b43539727c5ab70a99c2ec5c0c-merged.mount: Deactivated successfully.
Dec  4 05:22:25 np0005545273 podman[124971]: 2025-12-04 10:22:25.028173059 +0000 UTC m=+0.177395676 container remove 04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:22:25 np0005545273 systemd[1]: libpod-conmon-04159769dbfa04aacab85d3ad935529580238e9c9d169b391e3721804ec49f77.scope: Deactivated successfully.
Dec  4 05:22:25 np0005545273 podman[125057]: 2025-12-04 10:22:25.191616258 +0000 UTC m=+0.047048206 container create f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:22:25 np0005545273 systemd[1]: Started libpod-conmon-f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c.scope.
Dec  4 05:22:25 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:22:25 np0005545273 podman[125057]: 2025-12-04 10:22:25.168959192 +0000 UTC m=+0.024391170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:22:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:25 np0005545273 podman[125057]: 2025-12-04 10:22:25.280384542 +0000 UTC m=+0.135816530 container init f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:22:25 np0005545273 podman[125057]: 2025-12-04 10:22:25.287682545 +0000 UTC m=+0.143114503 container start f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:22:25 np0005545273 podman[125057]: 2025-12-04 10:22:25.294400922 +0000 UTC m=+0.149832880 container attach f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  4 05:22:25 np0005545273 python3.9[125104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]: {
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:    "0": [
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:        {
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "devices": [
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "/dev/loop3"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            ],
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_name": "ceph_lv0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_size": "21470642176",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "name": "ceph_lv0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "tags": {
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cluster_name": "ceph",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.crush_device_class": "",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.encrypted": "0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.objectstore": "bluestore",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osd_id": "0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.type": "block",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.vdo": "0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.with_tpm": "0"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            },
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "type": "block",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "vg_name": "ceph_vg0"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:        }
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:    ],
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:    "1": [
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:        {
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "devices": [
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "/dev/loop4"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            ],
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_name": "ceph_lv1",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_size": "21470642176",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "name": "ceph_lv1",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "tags": {
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cluster_name": "ceph",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.crush_device_class": "",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.encrypted": "0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.objectstore": "bluestore",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osd_id": "1",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.type": "block",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.vdo": "0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.with_tpm": "0"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            },
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "type": "block",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "vg_name": "ceph_vg1"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:        }
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:    ],
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:    "2": [
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:        {
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "devices": [
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "/dev/loop5"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            ],
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_name": "ceph_lv2",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_size": "21470642176",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "name": "ceph_lv2",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "tags": {
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.cluster_name": "ceph",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.crush_device_class": "",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.encrypted": "0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.objectstore": "bluestore",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osd_id": "2",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.type": "block",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.vdo": "0",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:                "ceph.with_tpm": "0"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            },
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "type": "block",
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:            "vg_name": "ceph_vg2"
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:        }
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]:    ]
Dec  4 05:22:25 np0005545273 distracted_bhabha[125102]: }
Dec  4 05:22:25 np0005545273 systemd[1]: libpod-f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c.scope: Deactivated successfully.
Dec  4 05:22:25 np0005545273 podman[125057]: 2025-12-04 10:22:25.604624612 +0000 UTC m=+0.460056560 container died f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:22:25 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c9de11e2fa0835cc184ff1de7389abcb73d21d7eb3b1407263b179285ace0250-merged.mount: Deactivated successfully.
Dec  4 05:22:25 np0005545273 podman[125057]: 2025-12-04 10:22:25.673003418 +0000 UTC m=+0.528435376 container remove f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_bhabha, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 05:22:25 np0005545273 systemd[1]: libpod-conmon-f66e1403f658d1302f48291d921bb5fd5b5a290e2ef6566b71a748d14330da4c.scope: Deactivated successfully.
Dec  4 05:22:26 np0005545273 podman[125261]: 2025-12-04 10:22:26.13881404 +0000 UTC m=+0.024804750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:22:26 np0005545273 podman[125261]: 2025-12-04 10:22:26.352402819 +0000 UTC m=+0.238393499 container create 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:22:26 np0005545273 systemd[1]: Started libpod-conmon-9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93.scope.
Dec  4 05:22:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:22:26 np0005545273 podman[125261]: 2025-12-04 10:22:26.45465748 +0000 UTC m=+0.340648180 container init 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:22:26 np0005545273 podman[125261]: 2025-12-04 10:22:26.461067491 +0000 UTC m=+0.347058171 container start 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:22:26 np0005545273 podman[125261]: 2025-12-04 10:22:26.464502466 +0000 UTC m=+0.350493166 container attach 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:22:26 np0005545273 peaceful_bell[125353]: 167 167
Dec  4 05:22:26 np0005545273 systemd[1]: libpod-9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93.scope: Deactivated successfully.
Dec  4 05:22:26 np0005545273 podman[125261]: 2025-12-04 10:22:26.466377592 +0000 UTC m=+0.352368272 container died 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  4 05:22:26 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8e3f7396a9f9aa3b78c185cdd8a113bd75edc84cbc98630e733e6fa8ee97261d-merged.mount: Deactivated successfully.
Dec  4 05:22:26 np0005545273 python3.9[125350]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBaDrGsfyH66GeTPneOf4P9cqhJJcxgP3bu0E7RAjEstx4o7NevlnfodrpsWI3GhJ5z8ru5yYrnT8gj6K/RfM5zjWXW+Ul4lDWJ1UnIBsqOM+qHdwpyOanGFwsD1SStOqDLQRPhop1d9LdePkBXvJSXJ80Mpcjwm1bfGwN/fJl8zLFWskfkIYThTGAzthtkHNPXQXTBX+VOKpcthU/qN5CP8Y/w/9w96vwq/0dHExjueOOk28BTWEQCwxPpkb1Wrd6hQ3KYnZye2JOZh3qqNaX44hPg8VLhv3agVerNv6vRiI2EbdHHYD2I5gXfV7bQGhRzhpFEZm2DfYLr5b8H1kG9ocx3KHW2+TctXCO2hCdJhjjuQQb033in90uXPuMsEEvmtCnc5vbJ5DKpgiaJysNZhmTkpKiJ4UVa6HeBh3riio7zeHc3bjI/1AD1cejpy6OEoWwk/X8ydA6bau1ApGvoHoEAXhlES4J/a6CUovnch+uMkircx8hJcYthuNhJIk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBhSkNncUNzxmzyjy22XSoHmC2WfRWk9PEzKRLlibq2#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBBeg0yEcOxT9ax0vZC/VGcWoLt2isE/U7UTL1uRpP8q51Um5h2uaP4tcFVGL1g6uXlC20O3SCTRskwpUg5sj6I=#012 create=True mode=0644 path=/tmp/ansible.h5jwro0g state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:26 np0005545273 podman[125261]: 2025-12-04 10:22:26.506793601 +0000 UTC m=+0.392784301 container remove 9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:22:26 np0005545273 systemd[1]: libpod-conmon-9a6f136e09924e477170fd719c281efefa4bc6d839f11a96e641a60f979d1a93.scope: Deactivated successfully.
Dec  4 05:22:26 np0005545273 podman[125402]: 2025-12-04 10:22:26.656591968 +0000 UTC m=+0.039875215 container create fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:22:26 np0005545273 systemd[1]: Started libpod-conmon-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope.
Dec  4 05:22:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:22:26
Dec  4 05:22:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:22:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:22:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups']
Dec  4 05:22:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:22:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:22:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:26 np0005545273 podman[125402]: 2025-12-04 10:22:26.639989404 +0000 UTC m=+0.023272671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:22:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:22:26 np0005545273 podman[125402]: 2025-12-04 10:22:26.745004284 +0000 UTC m=+0.128287551 container init fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Dec  4 05:22:26 np0005545273 podman[125402]: 2025-12-04 10:22:26.75323768 +0000 UTC m=+0.136520927 container start fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:22:26 np0005545273 podman[125402]: 2025-12-04 10:22:26.755669321 +0000 UTC m=+0.138952568 container attach fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 05:22:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:27 np0005545273 python3.9[125563]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.h5jwro0g' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:22:27 np0005545273 lvm[125674]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:22:27 np0005545273 lvm[125674]: VG ceph_vg0 finished
Dec  4 05:22:27 np0005545273 lvm[125675]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:22:27 np0005545273 lvm[125675]: VG ceph_vg1 finished
Dec  4 05:22:27 np0005545273 lvm[125681]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:22:27 np0005545273 lvm[125681]: VG ceph_vg2 finished
Dec  4 05:22:27 np0005545273 gracious_meitner[125466]: {}
Dec  4 05:22:27 np0005545273 systemd[1]: libpod-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope: Deactivated successfully.
Dec  4 05:22:27 np0005545273 systemd[1]: libpod-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope: Consumed 1.379s CPU time.
Dec  4 05:22:27 np0005545273 podman[125402]: 2025-12-04 10:22:27.645990554 +0000 UTC m=+1.029273801 container died fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:22:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6fcfff10130993454ecf81993e8c5d8e61091a42a5bdd4fef76b65beabc2de9a-merged.mount: Deactivated successfully.
Dec  4 05:22:27 np0005545273 podman[125402]: 2025-12-04 10:22:27.68991034 +0000 UTC m=+1.073193587 container remove fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:22:27 np0005545273 systemd[1]: libpod-conmon-fbde47c2d8f2fc1536e4363af7a36f69c543db93c281036c7565cc53ba0beeef.scope: Deactivated successfully.
Dec  4 05:22:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:22:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:22:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:22:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:22:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:22:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:22:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:22:28 np0005545273 python3.9[125822]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.h5jwro0g state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:28 np0005545273 systemd[1]: session-41.scope: Deactivated successfully.
Dec  4 05:22:28 np0005545273 systemd[1]: session-41.scope: Consumed 4.800s CPU time.
Dec  4 05:22:28 np0005545273 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Dec  4 05:22:28 np0005545273 systemd-logind[798]: Removed session 41.
Dec  4 05:22:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:28 np0005545273 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  4 05:22:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:33 np0005545273 systemd-logind[798]: New session 42 of user zuul.
Dec  4 05:22:33 np0005545273 systemd[1]: Started Session 42 of User zuul.
Dec  4 05:22:34 np0005545273 python3.9[126003]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:22:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:36 np0005545273 python3.9[126159]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  4 05:22:36 np0005545273 python3.9[126313]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:22:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:22:37 np0005545273 python3.9[126466]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:22:38 np0005545273 python3.9[126619]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:22:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:39 np0005545273 python3.9[126771]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:22:39 np0005545273 systemd[1]: session-42.scope: Deactivated successfully.
Dec  4 05:22:39 np0005545273 systemd[1]: session-42.scope: Consumed 3.575s CPU time.
Dec  4 05:22:39 np0005545273 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Dec  4 05:22:39 np0005545273 systemd-logind[798]: Removed session 42.
Dec  4 05:22:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:44 np0005545273 systemd-logind[798]: New session 43 of user zuul.
Dec  4 05:22:44 np0005545273 systemd[1]: Started Session 43 of User zuul.
Dec  4 05:22:45 np0005545273 systemd[1]: session-18.scope: Deactivated successfully.
Dec  4 05:22:45 np0005545273 systemd[1]: session-18.scope: Consumed 1min 49.879s CPU time.
Dec  4 05:22:45 np0005545273 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Dec  4 05:22:45 np0005545273 systemd-logind[798]: Removed session 18.
Dec  4 05:22:45 np0005545273 python3.9[126949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:22:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:46 np0005545273 python3.9[127105]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:22:47 np0005545273 python3.9[127189]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  4 05:22:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:49 np0005545273 python3.9[127340]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:22:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:51 np0005545273 python3.9[127491]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 05:22:51 np0005545273 python3.9[127641]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:22:52 np0005545273 python3.9[127793]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:22:52 np0005545273 systemd[1]: session-43.scope: Deactivated successfully.
Dec  4 05:22:52 np0005545273 systemd[1]: session-43.scope: Consumed 5.679s CPU time.
Dec  4 05:22:52 np0005545273 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Dec  4 05:22:52 np0005545273 systemd-logind[798]: Removed session 43.
Dec  4 05:22:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:22:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:22:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:22:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:22:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:22:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:22:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:22:58 np0005545273 systemd-logind[798]: New session 44 of user zuul.
Dec  4 05:22:58 np0005545273 systemd[1]: Started Session 44 of User zuul.
Dec  4 05:22:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:22:59 np0005545273 python3.9[127973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:23:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:00 np0005545273 python3.9[128129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:01 np0005545273 python3.9[128281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:03 np0005545273 python3.9[128433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:03 np0005545273 python3.9[128556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843781.7983155-65-106733408814055/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=81511a8c029290643b20bb87c9f35389df2dbe4b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:04 np0005545273 python3.9[128708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:04 np0005545273 python3.9[128831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843783.8368237-65-13810163128928/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c3374d4610bc0ee65063b6de1905070784021c61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:05 np0005545273 python3.9[128983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:06 np0005545273 python3.9[129106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843784.9504771-65-138905972288115/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d43a22e1b31002b3767a06d3002da3fc47ddce6d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:06 np0005545273 python3.9[129258]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:08 np0005545273 python3.9[129410]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:08 np0005545273 python3.9[129564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:09 np0005545273 python3.9[129687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843788.4837463-124-195150463012650/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5034f68dea2e01161d2dd1c287333174d79beab6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:10 np0005545273 python3.9[129839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:10 np0005545273 python3.9[129962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843789.5711393-124-199144796876179/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=bc95c030718f7b888fdaa320eb4dd80dc2a36cf0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:11 np0005545273 python3.9[130114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:11 np0005545273 python3.9[130237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843790.661143-124-261133524993298/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a6000cc946b5cfb01bb3913e9a943dfd39c04e6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:12 np0005545273 python3.9[130390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:13 np0005545273 python3.9[130543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:13 np0005545273 python3.9[130696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:14 np0005545273 python3.9[130819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843793.2571468-183-147177057385642/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=865674be7d17eaa7bdfa20885df08114fc86c2da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:14 np0005545273 python3.9[130971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:15 np0005545273 python3.9[131094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843794.3754284-183-104863129007651/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=bc95c030718f7b888fdaa320eb4dd80dc2a36cf0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:16 np0005545273 python3.9[131246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:16 np0005545273 python3.9[131369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843795.644625-183-63321839445067/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d3da86dcc4ab46f92c4982f6a65d424bec9319aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:17 np0005545273 python3.9[131521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:18 np0005545273 python3.9[131674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:18 np0005545273 python3.9[131797]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843797.9484406-251-131388131631049/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:19 np0005545273 python3.9[131949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:20 np0005545273 python3.9[132101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:20 np0005545273 python3.9[132224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843799.7225451-275-178684544813999/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:21 np0005545273 python3.9[132376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:22 np0005545273 python3.9[132528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:22 np0005545273 python3.9[132651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843801.4869964-299-194653593042473/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:23 np0005545273 python3.9[132803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:23 np0005545273 python3.9[132955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:24 np0005545273 python3.9[133078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843803.4043937-323-89993190549934/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:25 np0005545273 python3.9[133230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:25 np0005545273 python3.9[133382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:26 np0005545273 python3.9[133505]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843805.210789-347-96209227728674/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.301177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806301425, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1569, "num_deletes": 251, "total_data_size": 2217000, "memory_usage": 2256840, "flush_reason": "Manual Compaction"}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806310752, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1304364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7390, "largest_seqno": 8958, "table_properties": {"data_size": 1299180, "index_size": 2260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15196, "raw_average_key_size": 20, "raw_value_size": 1286907, "raw_average_value_size": 1750, "num_data_blocks": 106, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843661, "oldest_key_time": 1764843661, "file_creation_time": 1764843806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 9643 microseconds, and 4715 cpu microseconds.
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.310827) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1304364 bytes OK
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.310864) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.313526) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.313550) EVENT_LOG_v1 {"time_micros": 1764843806313541, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.313577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2209966, prev total WAL file size 2209966, number of live WAL files 2.
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.314704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1273KB)], [20(7320KB)]
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806314771, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8800306, "oldest_snapshot_seqno": -1}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3397 keys, 6883511 bytes, temperature: kUnknown
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806370332, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6883511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6857900, "index_size": 16030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81480, "raw_average_key_size": 23, "raw_value_size": 6793596, "raw_average_value_size": 1999, "num_data_blocks": 710, "num_entries": 3397, "num_filter_entries": 3397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764843806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.370586) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6883511 bytes
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.371971) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.2 rd, 123.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.0) write-amplify(5.3) OK, records in: 3845, records dropped: 448 output_compression: NoCompression
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.371991) EVENT_LOG_v1 {"time_micros": 1764843806371981, "job": 6, "event": "compaction_finished", "compaction_time_micros": 55637, "compaction_time_cpu_micros": 21632, "output_level": 6, "num_output_files": 1, "total_output_size": 6883511, "num_input_records": 3845, "num_output_records": 3397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806372376, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843806373705, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.314610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:23:26 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:23:26.373773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:23:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:23:26
Dec  4 05:23:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:23:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:23:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', 'images', '.rgw.root']
Dec  4 05:23:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:23:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:26 np0005545273 python3.9[133657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:27 np0005545273 python3.9[133809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:23:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:23:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:23:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:23:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:23:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:23:28 np0005545273 python3.9[133982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843807.0952694-371-277953686617442/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=30ac9e0c3193352f9a52990ef0ec51829bcb5137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:23:28 np0005545273 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Dec  4 05:23:28 np0005545273 systemd[1]: session-44.scope: Deactivated successfully.
Dec  4 05:23:28 np0005545273 systemd[1]: session-44.scope: Consumed 22.843s CPU time.
Dec  4 05:23:28 np0005545273 systemd-logind[798]: Removed session 44.
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:23:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:23:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:28 np0005545273 podman[134100]: 2025-12-04 10:23:28.897973475 +0000 UTC m=+0.026424195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:23:29 np0005545273 podman[134100]: 2025-12-04 10:23:29.101979596 +0000 UTC m=+0.230430336 container create f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:23:29 np0005545273 systemd[1]: Started libpod-conmon-f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea.scope.
Dec  4 05:23:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:23:29 np0005545273 podman[134100]: 2025-12-04 10:23:29.540683291 +0000 UTC m=+0.669134021 container init f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:23:29 np0005545273 podman[134100]: 2025-12-04 10:23:29.547976426 +0000 UTC m=+0.676427126 container start f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:23:29 np0005545273 cool_lichterman[134119]: 167 167
Dec  4 05:23:29 np0005545273 systemd[1]: libpod-f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea.scope: Deactivated successfully.
Dec  4 05:23:29 np0005545273 podman[134100]: 2025-12-04 10:23:29.62276625 +0000 UTC m=+0.751216960 container attach f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:23:29 np0005545273 podman[134100]: 2025-12-04 10:23:29.623498397 +0000 UTC m=+0.751949117 container died f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec  4 05:23:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c0b7f7e3bd34fedf2d95aff95ee6a4f7517cd568a00ae2605a750644096ba854-merged.mount: Deactivated successfully.
Dec  4 05:23:30 np0005545273 podman[134100]: 2025-12-04 10:23:30.079313093 +0000 UTC m=+1.207763783 container remove f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:23:30 np0005545273 systemd[1]: libpod-conmon-f94365e17959459ce849b4db07db691cdc17b48c9d1c6e461c63f596fa9d08ea.scope: Deactivated successfully.
Dec  4 05:23:30 np0005545273 podman[134144]: 2025-12-04 10:23:30.266503699 +0000 UTC m=+0.060167823 container create 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:23:30 np0005545273 systemd[1]: Started libpod-conmon-98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d.scope.
Dec  4 05:23:30 np0005545273 podman[134144]: 2025-12-04 10:23:30.241563142 +0000 UTC m=+0.035227316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:23:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:23:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:30 np0005545273 podman[134144]: 2025-12-04 10:23:30.366507347 +0000 UTC m=+0.160171521 container init 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:23:30 np0005545273 podman[134144]: 2025-12-04 10:23:30.376736332 +0000 UTC m=+0.170400456 container start 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:23:30 np0005545273 podman[134144]: 2025-12-04 10:23:30.380601025 +0000 UTC m=+0.174265179 container attach 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:23:30 np0005545273 dazzling_chandrasekhar[134160]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:23:30 np0005545273 dazzling_chandrasekhar[134160]: --> All data devices are unavailable
Dec  4 05:23:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:30 np0005545273 systemd[1]: libpod-98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d.scope: Deactivated successfully.
Dec  4 05:23:30 np0005545273 podman[134144]: 2025-12-04 10:23:30.895707973 +0000 UTC m=+0.689372117 container died 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:23:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9d928c8edcd59ece2d74ecf82f32634e584c39605e3ac604d85bc9ce4cd6ba1e-merged.mount: Deactivated successfully.
Dec  4 05:23:30 np0005545273 podman[134144]: 2025-12-04 10:23:30.948692213 +0000 UTC m=+0.742356337 container remove 98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_chandrasekhar, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:23:30 np0005545273 systemd[1]: libpod-conmon-98e925ca259e4abe16ef63ebb0e6df54b8683c502911eca93f3873c8d2c4702d.scope: Deactivated successfully.
Dec  4 05:23:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:31 np0005545273 podman[134253]: 2025-12-04 10:23:31.369234323 +0000 UTC m=+0.022016269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:23:31 np0005545273 podman[134253]: 2025-12-04 10:23:31.683278911 +0000 UTC m=+0.336060837 container create 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:23:31 np0005545273 systemd[1]: Started libpod-conmon-71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c.scope.
Dec  4 05:23:31 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:23:31 np0005545273 podman[134253]: 2025-12-04 10:23:31.771464605 +0000 UTC m=+0.424246541 container init 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:23:31 np0005545273 podman[134253]: 2025-12-04 10:23:31.777306635 +0000 UTC m=+0.430088561 container start 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  4 05:23:31 np0005545273 podman[134253]: 2025-12-04 10:23:31.781491775 +0000 UTC m=+0.434273701 container attach 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:23:31 np0005545273 ecstatic_jones[134269]: 167 167
Dec  4 05:23:31 np0005545273 systemd[1]: libpod-71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c.scope: Deactivated successfully.
Dec  4 05:23:31 np0005545273 podman[134253]: 2025-12-04 10:23:31.78293271 +0000 UTC m=+0.435714646 container died 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:23:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-7fd8cd0b30f7fe793d36cebe86756277096abdbd5b63acbb4445ee76789f1c27-merged.mount: Deactivated successfully.
Dec  4 05:23:31 np0005545273 podman[134253]: 2025-12-04 10:23:31.821519755 +0000 UTC m=+0.474301681 container remove 71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_jones, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:23:31 np0005545273 systemd[1]: libpod-conmon-71082ad76562c9a6e51c8143444bd1fa912e82cbab7e881dadb0a8bbca417d1c.scope: Deactivated successfully.
Dec  4 05:23:31 np0005545273 podman[134293]: 2025-12-04 10:23:31.975733522 +0000 UTC m=+0.041161178 container create 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:23:32 np0005545273 systemd[1]: Started libpod-conmon-885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95.scope.
Dec  4 05:23:32 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:23:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:32 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:32 np0005545273 podman[134293]: 2025-12-04 10:23:31.956145492 +0000 UTC m=+0.021573198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:23:32 np0005545273 podman[134293]: 2025-12-04 10:23:32.063764622 +0000 UTC m=+0.129192288 container init 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:23:32 np0005545273 podman[134293]: 2025-12-04 10:23:32.070472113 +0000 UTC m=+0.135899769 container start 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:23:32 np0005545273 podman[134293]: 2025-12-04 10:23:32.073880094 +0000 UTC m=+0.139307750 container attach 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]: {
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:    "0": [
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:        {
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "devices": [
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "/dev/loop3"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            ],
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_name": "ceph_lv0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_size": "21470642176",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "name": "ceph_lv0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "tags": {
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cluster_name": "ceph",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.crush_device_class": "",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.encrypted": "0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.objectstore": "bluestore",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osd_id": "0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.type": "block",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.vdo": "0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.with_tpm": "0"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            },
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "type": "block",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "vg_name": "ceph_vg0"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:        }
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:    ],
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:    "1": [
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:        {
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "devices": [
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "/dev/loop4"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            ],
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_name": "ceph_lv1",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_size": "21470642176",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "name": "ceph_lv1",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "tags": {
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cluster_name": "ceph",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.crush_device_class": "",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.encrypted": "0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.objectstore": "bluestore",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osd_id": "1",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.type": "block",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.vdo": "0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.with_tpm": "0"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            },
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "type": "block",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "vg_name": "ceph_vg1"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:        }
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:    ],
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:    "2": [
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:        {
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "devices": [
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "/dev/loop5"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            ],
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_name": "ceph_lv2",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_size": "21470642176",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "name": "ceph_lv2",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "tags": {
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.cluster_name": "ceph",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.crush_device_class": "",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.encrypted": "0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.objectstore": "bluestore",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osd_id": "2",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.type": "block",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.vdo": "0",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:                "ceph.with_tpm": "0"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            },
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "type": "block",
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:            "vg_name": "ceph_vg2"
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:        }
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]:    ]
Dec  4 05:23:32 np0005545273 jovial_elgamal[134309]: }
Dec  4 05:23:32 np0005545273 systemd[1]: libpod-885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95.scope: Deactivated successfully.
Dec  4 05:23:32 np0005545273 podman[134293]: 2025-12-04 10:23:32.373089867 +0000 UTC m=+0.438517523 container died 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:23:32 np0005545273 systemd[1]: var-lib-containers-storage-overlay-600ade4b21b840257d2236297d7b728a4ecff70837b82cfd4c197480012f3689-merged.mount: Deactivated successfully.
Dec  4 05:23:32 np0005545273 podman[134293]: 2025-12-04 10:23:32.435222046 +0000 UTC m=+0.500649702 container remove 885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_elgamal, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:23:32 np0005545273 systemd[1]: libpod-conmon-885902a8783044d3e38161f29ff8cb9182e2ad6c48347d1d43fc89cff4cc0e95.scope: Deactivated successfully.
Dec  4 05:23:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:32 np0005545273 podman[134394]: 2025-12-04 10:23:32.895456139 +0000 UTC m=+0.038406052 container create 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec  4 05:23:32 np0005545273 systemd[1]: Started libpod-conmon-9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750.scope.
Dec  4 05:23:32 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:23:32 np0005545273 podman[134394]: 2025-12-04 10:23:32.964919644 +0000 UTC m=+0.107869577 container init 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:23:32 np0005545273 podman[134394]: 2025-12-04 10:23:32.969955044 +0000 UTC m=+0.112904957 container start 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:23:32 np0005545273 podman[134394]: 2025-12-04 10:23:32.972779722 +0000 UTC m=+0.115729635 container attach 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:23:32 np0005545273 blissful_mccarthy[134410]: 167 167
Dec  4 05:23:32 np0005545273 systemd[1]: libpod-9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750.scope: Deactivated successfully.
Dec  4 05:23:32 np0005545273 podman[134394]: 2025-12-04 10:23:32.878670006 +0000 UTC m=+0.021619919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:23:32 np0005545273 podman[134394]: 2025-12-04 10:23:32.974480332 +0000 UTC m=+0.117430245 container died 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec  4 05:23:32 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e0397515e6a037f9a11cb0dd02e85b6cc2b575b6076bfdf5f3e46e4cfbc3a6ba-merged.mount: Deactivated successfully.
Dec  4 05:23:33 np0005545273 podman[134394]: 2025-12-04 10:23:33.012016472 +0000 UTC m=+0.154966385 container remove 9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  4 05:23:33 np0005545273 systemd[1]: libpod-conmon-9f7d59fbd5eb0086b1ea4b871f3fc8347fed9af4a198f8ed4827968b2deb6750.scope: Deactivated successfully.
Dec  4 05:23:33 np0005545273 podman[134434]: 2025-12-04 10:23:33.160851741 +0000 UTC m=+0.044772604 container create 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:23:33 np0005545273 systemd[1]: Started libpod-conmon-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope.
Dec  4 05:23:33 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:23:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:23:33 np0005545273 podman[134434]: 2025-12-04 10:23:33.140996254 +0000 UTC m=+0.024917137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:23:33 np0005545273 podman[134434]: 2025-12-04 10:23:33.237675402 +0000 UTC m=+0.121596425 container init 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  4 05:23:33 np0005545273 podman[134434]: 2025-12-04 10:23:33.245294375 +0000 UTC m=+0.129215228 container start 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:23:33 np0005545273 podman[134434]: 2025-12-04 10:23:33.249236069 +0000 UTC m=+0.133157052 container attach 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:23:33 np0005545273 systemd-logind[798]: New session 45 of user zuul.
Dec  4 05:23:33 np0005545273 systemd[1]: Started Session 45 of User zuul.
Dec  4 05:23:33 np0005545273 lvm[134631]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:23:33 np0005545273 lvm[134631]: VG ceph_vg0 finished
Dec  4 05:23:33 np0005545273 lvm[134637]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:23:33 np0005545273 lvm[134637]: VG ceph_vg1 finished
Dec  4 05:23:34 np0005545273 lvm[134658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:23:34 np0005545273 lvm[134658]: VG ceph_vg2 finished
Dec  4 05:23:34 np0005545273 lvm[134668]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:23:34 np0005545273 lvm[134668]: VG ceph_vg1 finished
Dec  4 05:23:34 np0005545273 lvm[134691]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:23:34 np0005545273 lvm[134691]: VG ceph_vg1 finished
Dec  4 05:23:34 np0005545273 practical_hertz[134451]: {}
Dec  4 05:23:34 np0005545273 podman[134434]: 2025-12-04 10:23:34.139556651 +0000 UTC m=+1.023477504 container died 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:23:34 np0005545273 systemd[1]: libpod-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope: Deactivated successfully.
Dec  4 05:23:34 np0005545273 systemd[1]: libpod-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope: Consumed 1.426s CPU time.
Dec  4 05:23:34 np0005545273 systemd[1]: var-lib-containers-storage-overlay-37233d329b714c8bf708086047a2419e537d7b2daf94bd11ac32c1414fd9477f-merged.mount: Deactivated successfully.
Dec  4 05:23:34 np0005545273 podman[134434]: 2025-12-04 10:23:34.186432275 +0000 UTC m=+1.070353128 container remove 2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_hertz, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:23:34 np0005545273 systemd[1]: libpod-conmon-2d1f9fa3c043400254adce3b2695b5d106f90767635f153410932b1c145066e3.scope: Deactivated successfully.
Dec  4 05:23:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:23:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:23:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:23:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:23:34 np0005545273 python3.9[134692]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:35 np0005545273 python3.9[134883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:23:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:23:35 np0005545273 python3.9[135006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843814.4129927-34-197785888702082/.source.conf _original_basename=ceph.conf follow=False checksum=743a744c283201ba2a628c2473976918c65bd541 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:36 np0005545273 python3.9[135158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:36 np0005545273 python3.9[135281]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843815.8882682-34-280672451107229/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=78fa63d8c69ed08876e15c6d423f4ac4e13914fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:36 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:23:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:23:37 np0005545273 systemd[1]: session-45.scope: Deactivated successfully.
Dec  4 05:23:37 np0005545273 systemd[1]: session-45.scope: Consumed 2.641s CPU time.
Dec  4 05:23:37 np0005545273 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Dec  4 05:23:37 np0005545273 systemd-logind[798]: Removed session 45.
Dec  4 05:23:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:42 np0005545273 systemd-logind[798]: New session 46 of user zuul.
Dec  4 05:23:42 np0005545273 systemd[1]: Started Session 46 of User zuul.
Dec  4 05:23:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:43 np0005545273 python3.9[135461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:23:44 np0005545273 python3.9[135617]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:45 np0005545273 python3.9[135769]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:23:46 np0005545273 python3.9[135919]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:23:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:47 np0005545273 python3.9[136071]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  4 05:23:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:48 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  4 05:23:49 np0005545273 python3.9[136228]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:23:50 np0005545273 python3.9[136312]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:23:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:52 np0005545273 python3.9[136465]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:23:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:53 np0005545273 python3[136620]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  4 05:23:54 np0005545273 python3.9[136772]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:55 np0005545273 python3.9[136924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:55 np0005545273 python3.9[137002]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:56 np0005545273 python3.9[137154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:23:56 np0005545273 python3.9[137232]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.uzajw5vp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:57 np0005545273 python3.9[137384]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:23:57 np0005545273 python3.9[137462]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:23:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:23:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:23:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:23:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:23:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:23:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:23:58 np0005545273 python3.9[137615]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:23:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:23:59 np0005545273 python3[137769]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 05:23:59 np0005545273 python3.9[137921]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:00 np0005545273 python3.9[138046]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843839.3780198-157-202304835720289/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:01 np0005545273 python3.9[138198]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:01 np0005545273 python3.9[138323]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843840.738866-172-209200007089401/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:02 np0005545273 python3.9[138475]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:03 np0005545273 python3.9[138600]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843841.9995663-187-220923969205040/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:03 np0005545273 python3.9[138752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:04 np0005545273 python3.9[138877]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843843.208006-202-128697812620889/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:04 np0005545273 python3.9[139030]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:05 np0005545273 python3.9[139155]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764843844.4228766-217-209509027150972/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:06 np0005545273 python3.9[139307]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:24:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2048 writes, 9132 keys, 2048 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2048 writes, 2048 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2048 writes, 9132 keys, 2048 commit groups, 1.0 writes per commit group, ingest: 11.64 MB, 0.02 MB/s#012Interval WAL: 2048 writes, 2048 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     92.1      0.09              0.02         3    0.031       0      0       0.0       0.0#012  L6      1/0    6.56 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     88.8     78.1      0.18              0.04         2    0.088    7244    737       0.0       0.0#012 Sum      1/0    6.56 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     58.4     82.9      0.27              0.06         5    0.053    7244    737       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     59.2     84.0      0.26              0.06         4    0.066    7244    737       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     88.8     78.1      0.18              0.04         2    0.088    7244    737       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.6      0.09              0.02         2    0.044       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 308.00 MB usage: 709.38 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(39,621.84 KB,0.197165%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,58.92 KB,0.0186821%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  4 05:24:06 np0005545273 python3.9[139459]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:07 np0005545273 python3.9[139614]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:08 np0005545273 python3.9[139766]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:09 np0005545273 python3.9[139921]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:24:09 np0005545273 python3.9[140075]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:10 np0005545273 python3.9[140230]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:11 np0005545273 python3.9[140380]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:24:12 np0005545273 python3.9[140533]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:12 np0005545273 ovs-vsctl[140534]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  4 05:24:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:13 np0005545273 python3.9[140686]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:14 np0005545273 python3.9[140841]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:14 np0005545273 ovs-vsctl[140842]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  4 05:24:14 np0005545273 python3.9[140992]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:24:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:15 np0005545273 python3.9[141146]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:16 np0005545273 python3.9[141298]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:16 np0005545273 python3.9[141376]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:17 np0005545273 python3.9[141528]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:17 np0005545273 python3.9[141606]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:18 np0005545273 python3.9[141758]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:18 np0005545273 python3.9[141910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:19 np0005545273 python3.9[141988]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:19 np0005545273 python3.9[142140]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:20 np0005545273 python3.9[142218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:21 np0005545273 python3.9[142370]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:24:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:21 np0005545273 systemd[1]: Reloading.
Dec  4 05:24:21 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:24:21 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:24:22 np0005545273 python3.9[142561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:22 np0005545273 python3.9[142639]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:23 np0005545273 python3.9[142791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:24 np0005545273 python3.9[142869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:24 np0005545273 python3.9[143021]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:24:24 np0005545273 systemd[1]: Reloading.
Dec  4 05:24:24 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:24:24 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:24:25 np0005545273 systemd[1]: Starting Create netns directory...
Dec  4 05:24:25 np0005545273 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  4 05:24:25 np0005545273 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  4 05:24:25 np0005545273 systemd[1]: Finished Create netns directory.
Dec  4 05:24:26 np0005545273 python3.9[143216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:24:26
Dec  4 05:24:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:24:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:24:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec  4 05:24:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:24:26 np0005545273 python3.9[143368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:27 np0005545273 python3.9[143491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843866.3751552-468-216481039494650/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:24:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:24:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:24:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:24:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:24:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:24:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:24:28 np0005545273 python3.9[143643]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:28 np0005545273 python3.9[143797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:24:29 np0005545273 python3.9[143920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843868.5357344-493-8178310881122/.source.json _original_basename=.mr03u9_e follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:30 np0005545273 python3.9[144072]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:32 np0005545273 python3.9[144499]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  4 05:24:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:33 np0005545273 python3.9[144651]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 05:24:33 np0005545273 python3.9[144803]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  4 05:24:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.977263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843874977306, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 795, "num_deletes": 251, "total_data_size": 1056740, "memory_usage": 1071376, "flush_reason": "Manual Compaction"}
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843874984582, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1047298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8959, "largest_seqno": 9753, "table_properties": {"data_size": 1043224, "index_size": 1790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8637, "raw_average_key_size": 18, "raw_value_size": 1035130, "raw_average_value_size": 2235, "num_data_blocks": 83, "num_entries": 463, "num_filter_entries": 463, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843807, "oldest_key_time": 1764843807, "file_creation_time": 1764843874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7363 microseconds, and 3689 cpu microseconds.
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.984624) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1047298 bytes OK
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.984651) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.985875) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.985889) EVENT_LOG_v1 {"time_micros": 1764843874985884, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.985909) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1052748, prev total WAL file size 1052748, number of live WAL files 2.
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.986432) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1022KB)], [23(6722KB)]
Dec  4 05:24:34 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843874986475, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7930809, "oldest_snapshot_seqno": -1}
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3346 keys, 6230652 bytes, temperature: kUnknown
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843875024219, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6230652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6206440, "index_size": 14759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81192, "raw_average_key_size": 24, "raw_value_size": 6144070, "raw_average_value_size": 1836, "num_data_blocks": 644, "num_entries": 3346, "num_filter_entries": 3346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764843874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.024433) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6230652 bytes
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.026077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.7 rd, 164.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.6 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(13.5) write-amplify(5.9) OK, records in: 3860, records dropped: 514 output_compression: NoCompression
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.026114) EVENT_LOG_v1 {"time_micros": 1764843875026087, "job": 8, "event": "compaction_finished", "compaction_time_micros": 37820, "compaction_time_cpu_micros": 15519, "output_level": 6, "num_output_files": 1, "total_output_size": 6230652, "num_input_records": 3860, "num_output_records": 3346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843875026364, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764843875027471, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:34.986389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:24:35 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:24:35.027524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:24:35 np0005545273 podman[145124]: 2025-12-04 10:24:35.322263446 +0000 UTC m=+0.040039669 container create e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:24:35 np0005545273 systemd[1]: Started libpod-conmon-e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293.scope.
Dec  4 05:24:35 np0005545273 python3[145110]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 05:24:35 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:24:35 np0005545273 podman[145124]: 2025-12-04 10:24:35.303069785 +0000 UTC m=+0.020846038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:24:35 np0005545273 podman[145124]: 2025-12-04 10:24:35.403979967 +0000 UTC m=+0.121756240 container init e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:24:35 np0005545273 podman[145124]: 2025-12-04 10:24:35.411571257 +0000 UTC m=+0.129347490 container start e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:24:35 np0005545273 podman[145124]: 2025-12-04 10:24:35.415570828 +0000 UTC m=+0.133347071 container attach e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:24:35 np0005545273 pensive_austin[145140]: 167 167
Dec  4 05:24:35 np0005545273 systemd[1]: libpod-e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293.scope: Deactivated successfully.
Dec  4 05:24:35 np0005545273 podman[145124]: 2025-12-04 10:24:35.418652553 +0000 UTC m=+0.136428806 container died e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:24:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2b170f13be015f36d6b66baea7c47217b5be1ed9ba9a4927987c11524e371ec6-merged.mount: Deactivated successfully.
Dec  4 05:24:35 np0005545273 podman[145124]: 2025-12-04 10:24:35.462976049 +0000 UTC m=+0.180752302 container remove e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_austin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:24:35 np0005545273 systemd[1]: libpod-conmon-e14b690ac9718afb774b5c2e378076953fb1fb2bf544fec1aac2ac7168abd293.scope: Deactivated successfully.
Dec  4 05:24:35 np0005545273 podman[145187]: 2025-12-04 10:24:35.634875476 +0000 UTC m=+0.040370128 container create 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:24:35 np0005545273 systemd[1]: Started libpod-conmon-21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead.scope.
Dec  4 05:24:35 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:24:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:35 np0005545273 podman[145187]: 2025-12-04 10:24:35.619716176 +0000 UTC m=+0.025210828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:24:35 np0005545273 podman[145187]: 2025-12-04 10:24:35.723999722 +0000 UTC m=+0.129494404 container init 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:24:35 np0005545273 podman[145187]: 2025-12-04 10:24:35.731428577 +0000 UTC m=+0.136923229 container start 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:24:35 np0005545273 podman[145187]: 2025-12-04 10:24:35.734901104 +0000 UTC m=+0.140395756 container attach 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:24:36 np0005545273 sweet_swartz[145202]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:24:36 np0005545273 sweet_swartz[145202]: --> All data devices are unavailable
Dec  4 05:24:36 np0005545273 systemd[1]: libpod-21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead.scope: Deactivated successfully.
Dec  4 05:24:36 np0005545273 podman[145241]: 2025-12-04 10:24:36.27362964 +0000 UTC m=+0.031107882 container died 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:24:36 np0005545273 systemd[1]: var-lib-containers-storage-overlay-51e42ff370a5f6a136d39a5d72ae3e6919f2d0c6157fa9dd07b3c842dce777af-merged.mount: Deactivated successfully.
Dec  4 05:24:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:36 np0005545273 podman[145241]: 2025-12-04 10:24:36.315382655 +0000 UTC m=+0.072860877 container remove 21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:24:36 np0005545273 systemd[1]: libpod-conmon-21c9098e5528e84d2726bd984eaf9135cb1fd6c73b69a8769a7f33dbd48c6ead.scope: Deactivated successfully.
Dec  4 05:24:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:36 np0005545273 podman[145316]: 2025-12-04 10:24:36.894031918 +0000 UTC m=+0.023717608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:24:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:24:37 np0005545273 podman[145316]: 2025-12-04 10:24:37.263466729 +0000 UTC m=+0.393152389 container create 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:24:37 np0005545273 systemd[1]: Started libpod-conmon-177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca.scope.
Dec  4 05:24:37 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:24:37 np0005545273 podman[145316]: 2025-12-04 10:24:37.532534555 +0000 UTC m=+0.662220225 container init 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:24:37 np0005545273 podman[145316]: 2025-12-04 10:24:37.541133043 +0000 UTC m=+0.670818713 container start 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:24:37 np0005545273 pensive_shockley[145342]: 167 167
Dec  4 05:24:37 np0005545273 systemd[1]: libpod-177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca.scope: Deactivated successfully.
Dec  4 05:24:37 np0005545273 podman[145316]: 2025-12-04 10:24:37.548323581 +0000 UTC m=+0.678009241 container attach 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:24:37 np0005545273 podman[145316]: 2025-12-04 10:24:37.549043261 +0000 UTC m=+0.678728911 container died 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:24:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:41 np0005545273 systemd[1]: var-lib-containers-storage-overlay-720dc0ce7c3cdb0bac0a1fe2041e5e8e54c0e91df6c89140418f81c94342ba4e-merged.mount: Deactivated successfully.
Dec  4 05:24:41 np0005545273 podman[145316]: 2025-12-04 10:24:41.762267962 +0000 UTC m=+4.891953622 container remove 177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:24:41 np0005545273 systemd[1]: libpod-conmon-177ad5e2a9d9e15f7a022afef713b95664b972fabd90c6c537546a051cd9daca.scope: Deactivated successfully.
Dec  4 05:24:41 np0005545273 podman[145157]: 2025-12-04 10:24:41.811739341 +0000 UTC m=+6.381793686 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  4 05:24:41 np0005545273 podman[145443]: 2025-12-04 10:24:41.958298216 +0000 UTC m=+0.055523647 container create c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:24:41 np0005545273 podman[145445]: 2025-12-04 10:24:41.976815159 +0000 UTC m=+0.062981644 container create 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  4 05:24:41 np0005545273 podman[145445]: 2025-12-04 10:24:41.944471484 +0000 UTC m=+0.030638019 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  4 05:24:41 np0005545273 python3[145110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  4 05:24:42 np0005545273 systemd[1]: Started libpod-conmon-c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64.scope.
Dec  4 05:24:42 np0005545273 podman[145443]: 2025-12-04 10:24:41.928012888 +0000 UTC m=+0.025238369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:24:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:24:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:42 np0005545273 podman[145443]: 2025-12-04 10:24:42.057282135 +0000 UTC m=+0.154507556 container init c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec  4 05:24:42 np0005545273 podman[145443]: 2025-12-04 10:24:42.06756395 +0000 UTC m=+0.164789361 container start c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:24:42 np0005545273 podman[145443]: 2025-12-04 10:24:42.070625684 +0000 UTC m=+0.167851095 container attach c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]: {
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:    "0": [
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:        {
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "devices": [
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "/dev/loop3"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            ],
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_name": "ceph_lv0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_size": "21470642176",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "name": "ceph_lv0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "tags": {
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cluster_name": "ceph",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.crush_device_class": "",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.encrypted": "0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.objectstore": "bluestore",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osd_id": "0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.type": "block",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.vdo": "0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.with_tpm": "0"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            },
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "type": "block",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "vg_name": "ceph_vg0"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:        }
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:    ],
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:    "1": [
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:        {
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "devices": [
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "/dev/loop4"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            ],
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_name": "ceph_lv1",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_size": "21470642176",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "name": "ceph_lv1",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "tags": {
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cluster_name": "ceph",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.crush_device_class": "",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.encrypted": "0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.objectstore": "bluestore",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osd_id": "1",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.type": "block",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.vdo": "0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.with_tpm": "0"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            },
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "type": "block",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "vg_name": "ceph_vg1"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:        }
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:    ],
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:    "2": [
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:        {
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "devices": [
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "/dev/loop5"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            ],
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_name": "ceph_lv2",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_size": "21470642176",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "name": "ceph_lv2",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "tags": {
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.cluster_name": "ceph",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.crush_device_class": "",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.encrypted": "0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.objectstore": "bluestore",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osd_id": "2",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.type": "block",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.vdo": "0",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:                "ceph.with_tpm": "0"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            },
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "type": "block",
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:            "vg_name": "ceph_vg2"
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:        }
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]:    ]
Dec  4 05:24:42 np0005545273 wizardly_lamarr[145478]: }
Dec  4 05:24:42 np0005545273 systemd[1]: libpod-c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64.scope: Deactivated successfully.
Dec  4 05:24:42 np0005545273 podman[145443]: 2025-12-04 10:24:42.424145406 +0000 UTC m=+0.521370837 container died c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:24:42 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3a449596c4c15ea71643c2f1749fe8d46dc2dda0479ff5ff972a53f08c8e1de2-merged.mount: Deactivated successfully.
Dec  4 05:24:42 np0005545273 podman[145443]: 2025-12-04 10:24:42.471029264 +0000 UTC m=+0.568254685 container remove c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:24:42 np0005545273 systemd[1]: libpod-conmon-c4f50136d1b896015febe930211903c2d14c1d77e5e9288a5dcbe8eeb1521f64.scope: Deactivated successfully.
Dec  4 05:24:42 np0005545273 python3.9[145720]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:24:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:43 np0005545273 podman[145736]: 2025-12-04 10:24:43.014931513 +0000 UTC m=+0.049757397 container create c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  4 05:24:43 np0005545273 systemd[1]: Started libpod-conmon-c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909.scope.
Dec  4 05:24:43 np0005545273 podman[145736]: 2025-12-04 10:24:42.997893682 +0000 UTC m=+0.032719566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:24:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:24:43 np0005545273 podman[145736]: 2025-12-04 10:24:43.117853212 +0000 UTC m=+0.152679116 container init c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:24:43 np0005545273 podman[145736]: 2025-12-04 10:24:43.126181642 +0000 UTC m=+0.161007536 container start c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:24:43 np0005545273 unruffled_montalcini[145776]: 167 167
Dec  4 05:24:43 np0005545273 podman[145736]: 2025-12-04 10:24:43.131327444 +0000 UTC m=+0.166153358 container attach c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:24:43 np0005545273 systemd[1]: libpod-c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909.scope: Deactivated successfully.
Dec  4 05:24:43 np0005545273 podman[145736]: 2025-12-04 10:24:43.133135054 +0000 UTC m=+0.167960948 container died c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:24:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-70cbf0c1439b02d228ea08d7b0ec5c911f51884f31523b683a3cc38f01db497b-merged.mount: Deactivated successfully.
Dec  4 05:24:43 np0005545273 podman[145736]: 2025-12-04 10:24:43.186054488 +0000 UTC m=+0.220880382 container remove c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:24:43 np0005545273 systemd[1]: libpod-conmon-c4e2259d81c240930ef551a75147f309bddfec5ef59bdd90e1a46a00fb693909.scope: Deactivated successfully.
Dec  4 05:24:43 np0005545273 podman[145877]: 2025-12-04 10:24:43.377261179 +0000 UTC m=+0.051697582 container create 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:24:43 np0005545273 systemd[1]: Started libpod-conmon-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope.
Dec  4 05:24:43 np0005545273 podman[145877]: 2025-12-04 10:24:43.352223746 +0000 UTC m=+0.026660199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:24:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:24:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:43 np0005545273 podman[145877]: 2025-12-04 10:24:43.471768164 +0000 UTC m=+0.146204567 container init 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:24:43 np0005545273 podman[145877]: 2025-12-04 10:24:43.481087312 +0000 UTC m=+0.155523715 container start 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:24:43 np0005545273 podman[145877]: 2025-12-04 10:24:43.484336512 +0000 UTC m=+0.158772915 container attach 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec  4 05:24:43 np0005545273 python3.9[145951]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:44 np0005545273 python3.9[146054]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:24:44 np0005545273 lvm[146125]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:24:44 np0005545273 lvm[146125]: VG ceph_vg0 finished
Dec  4 05:24:44 np0005545273 lvm[146126]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:24:44 np0005545273 lvm[146126]: VG ceph_vg1 finished
Dec  4 05:24:44 np0005545273 lvm[146133]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:24:44 np0005545273 lvm[146133]: VG ceph_vg2 finished
Dec  4 05:24:44 np0005545273 quirky_varahamihira[145918]: {}
Dec  4 05:24:44 np0005545273 systemd[1]: libpod-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope: Deactivated successfully.
Dec  4 05:24:44 np0005545273 systemd[1]: libpod-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope: Consumed 1.426s CPU time.
Dec  4 05:24:44 np0005545273 podman[145877]: 2025-12-04 10:24:44.33683042 +0000 UTC m=+1.011266873 container died 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:24:44 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b28d5543fbae154dc398eb0f086aee6a0f75897ba5c78f4d09cafec7ae2e41bd-merged.mount: Deactivated successfully.
Dec  4 05:24:44 np0005545273 podman[145877]: 2025-12-04 10:24:44.383537883 +0000 UTC m=+1.057974286 container remove 64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:24:44 np0005545273 systemd[1]: libpod-conmon-64461f6edf724cb2856212f4450139fe77f1cf5f455d63c9a50b70a31525f040.scope: Deactivated successfully.
Dec  4 05:24:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:24:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:24:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:24:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:24:44 np0005545273 python3.9[146295]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843884.1837006-581-123452624321607/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:24:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:45 np0005545273 python3.9[146371]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:24:45 np0005545273 systemd[1]: Reloading.
Dec  4 05:24:45 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:24:45 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:24:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:24:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:24:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:46 np0005545273 python3.9[146483]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:24:46 np0005545273 systemd[1]: Reloading.
Dec  4 05:24:46 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:24:46 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:24:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:47 np0005545273 systemd[1]: Starting ovn_controller container...
Dec  4 05:24:48 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:24:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb152d2b6b97a8dcfc4acfb3d286a54e858d1d391f4d3ca6b50f427eb3899b84/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  4 05:24:48 np0005545273 systemd[1]: Started /usr/bin/podman healthcheck run 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06.
Dec  4 05:24:48 np0005545273 podman[146523]: 2025-12-04 10:24:48.283951317 +0000 UTC m=+0.344000509 container init 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + sudo -E kolla_set_configs
Dec  4 05:24:48 np0005545273 podman[146523]: 2025-12-04 10:24:48.320224391 +0000 UTC m=+0.380273573 container start 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:24:48 np0005545273 edpm-start-podman-container[146523]: ovn_controller
Dec  4 05:24:48 np0005545273 systemd[1]: Created slice User Slice of UID 0.
Dec  4 05:24:48 np0005545273 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  4 05:24:48 np0005545273 edpm-start-podman-container[146522]: Creating additional drop-in dependency for "ovn_controller" (0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06)
Dec  4 05:24:48 np0005545273 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  4 05:24:48 np0005545273 systemd[1]: Starting User Manager for UID 0...
Dec  4 05:24:48 np0005545273 systemd[1]: Reloading.
Dec  4 05:24:48 np0005545273 podman[146545]: 2025-12-04 10:24:48.439374568 +0000 UTC m=+0.095373270 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  4 05:24:48 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:24:48 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:24:48 np0005545273 systemd[146577]: Queued start job for default target Main User Target.
Dec  4 05:24:48 np0005545273 systemd[146577]: Created slice User Application Slice.
Dec  4 05:24:48 np0005545273 systemd[146577]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  4 05:24:48 np0005545273 systemd[146577]: Started Daily Cleanup of User's Temporary Directories.
Dec  4 05:24:48 np0005545273 systemd[146577]: Reached target Paths.
Dec  4 05:24:48 np0005545273 systemd[146577]: Reached target Timers.
Dec  4 05:24:48 np0005545273 systemd[146577]: Starting D-Bus User Message Bus Socket...
Dec  4 05:24:48 np0005545273 systemd[146577]: Starting Create User's Volatile Files and Directories...
Dec  4 05:24:48 np0005545273 systemd[146577]: Listening on D-Bus User Message Bus Socket.
Dec  4 05:24:48 np0005545273 systemd[146577]: Reached target Sockets.
Dec  4 05:24:48 np0005545273 systemd[146577]: Finished Create User's Volatile Files and Directories.
Dec  4 05:24:48 np0005545273 systemd[146577]: Reached target Basic System.
Dec  4 05:24:48 np0005545273 systemd[146577]: Reached target Main User Target.
Dec  4 05:24:48 np0005545273 systemd[146577]: Startup finished in 165ms.
Dec  4 05:24:48 np0005545273 systemd[1]: Started User Manager for UID 0.
Dec  4 05:24:48 np0005545273 systemd[1]: Started ovn_controller container.
Dec  4 05:24:48 np0005545273 systemd[1]: 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06-3769d24cd29046b4.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 05:24:48 np0005545273 systemd[1]: 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06-3769d24cd29046b4.service: Failed with result 'exit-code'.
Dec  4 05:24:48 np0005545273 systemd[1]: Started Session c1 of User root.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: INFO:__main__:Validating config file
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: INFO:__main__:Writing out command to execute
Dec  4 05:24:48 np0005545273 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: ++ cat /run_command
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + ARGS=
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + sudo kolla_copy_cacerts
Dec  4 05:24:48 np0005545273 systemd[1]: Started Session c2 of User root.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + [[ ! -n '' ]]
Dec  4 05:24:48 np0005545273 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + . kolla_extend_start
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + umask 0022
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  4 05:24:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9274] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9285] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9298] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9304] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9308] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  4 05:24:48 np0005545273 kernel: br-int: entered promiscuous mode
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 05:24:48 np0005545273 ovn_controller[146538]: 2025-12-04T10:24:48Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9562] manager: (ovn-bb8252-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  4 05:24:48 np0005545273 systemd-udevd[146696]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 05:24:48 np0005545273 kernel: genev_sys_6081: entered promiscuous mode
Dec  4 05:24:48 np0005545273 systemd-udevd[146698]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9798] device (genev_sys_6081): carrier: link connected
Dec  4 05:24:48 np0005545273 NetworkManager[49155]: <info>  [1764843888.9803] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec  4 05:24:49 np0005545273 python3.9[146804]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:49 np0005545273 ovs-vsctl[146805]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  4 05:24:50 np0005545273 python3.9[146957]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:50 np0005545273 ovs-vsctl[146959]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  4 05:24:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:51 np0005545273 python3.9[147112]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:24:51 np0005545273 ovs-vsctl[147113]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  4 05:24:51 np0005545273 systemd[1]: session-46.scope: Deactivated successfully.
Dec  4 05:24:51 np0005545273 systemd[1]: session-46.scope: Consumed 58.170s CPU time.
Dec  4 05:24:51 np0005545273 systemd-logind[798]: Session 46 logged out. Waiting for processes to exit.
Dec  4 05:24:51 np0005545273 systemd-logind[798]: Removed session 46.
Dec  4 05:24:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:24:56 np0005545273 systemd-logind[798]: New session 48 of user zuul.
Dec  4 05:24:56 np0005545273 systemd[1]: Started Session 48 of User zuul.
Dec  4 05:24:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:57 np0005545273 python3.9[147293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:24:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:24:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:24:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:24:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:24:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:24:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:24:58 np0005545273 python3.9[147449]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:24:59 np0005545273 systemd[1]: Stopping User Manager for UID 0...
Dec  4 05:24:59 np0005545273 systemd[146577]: Activating special unit Exit the Session...
Dec  4 05:24:59 np0005545273 systemd[146577]: Stopped target Main User Target.
Dec  4 05:24:59 np0005545273 systemd[146577]: Stopped target Basic System.
Dec  4 05:24:59 np0005545273 systemd[146577]: Stopped target Paths.
Dec  4 05:24:59 np0005545273 systemd[146577]: Stopped target Sockets.
Dec  4 05:24:59 np0005545273 systemd[146577]: Stopped target Timers.
Dec  4 05:24:59 np0005545273 systemd[146577]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  4 05:24:59 np0005545273 systemd[146577]: Closed D-Bus User Message Bus Socket.
Dec  4 05:24:59 np0005545273 systemd[146577]: Stopped Create User's Volatile Files and Directories.
Dec  4 05:24:59 np0005545273 systemd[146577]: Removed slice User Application Slice.
Dec  4 05:24:59 np0005545273 systemd[146577]: Reached target Shutdown.
Dec  4 05:24:59 np0005545273 systemd[146577]: Finished Exit the Session.
Dec  4 05:24:59 np0005545273 systemd[146577]: Reached target Exit the Session.
Dec  4 05:24:59 np0005545273 systemd[1]: user@0.service: Deactivated successfully.
Dec  4 05:24:59 np0005545273 systemd[1]: Stopped User Manager for UID 0.
Dec  4 05:24:59 np0005545273 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  4 05:24:59 np0005545273 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  4 05:24:59 np0005545273 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  4 05:24:59 np0005545273 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  4 05:24:59 np0005545273 systemd[1]: Removed slice User Slice of UID 0.
Dec  4 05:24:59 np0005545273 python3.9[147603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:24:59 np0005545273 python3.9[147755]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:00 np0005545273 python3.9[147907]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:01 np0005545273 python3.9[148059]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:01 np0005545273 python3.9[148209]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:25:02 np0005545273 python3.9[148361]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  4 05:25:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:04 np0005545273 python3.9[148511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:05 np0005545273 python3.9[148632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843903.6793542-86-78928593574865/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:05 np0005545273 python3.9[148782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:06 np0005545273 python3.9[148903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843905.2663631-101-53476565445212/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:07 np0005545273 python3.9[149056]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:25:08 np0005545273 python3.9[149140]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:25:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:10 np0005545273 python3.9[149293]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:25:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:11 np0005545273 python3.9[149446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:11 np0005545273 python3.9[149567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843910.9214911-138-74426705696416/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:12 np0005545273 python3.9[149717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:12 np0005545273 python3.9[149838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843912.0229387-138-220106028479068/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:14 np0005545273 python3.9[149988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:14 np0005545273 python3.9[150109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843913.7120917-182-37497758006658/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:15 np0005545273 python3.9[150259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:15 np0005545273 python3.9[150380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843914.8843508-182-179715745958886/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:16 np0005545273 python3.9[150530]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:25:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:17 np0005545273 python3.9[150684]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:18 np0005545273 python3.9[150836]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:18 np0005545273 python3.9[150914]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:18 np0005545273 ovn_controller[146538]: 2025-12-04T10:25:18Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Dec  4 05:25:18 np0005545273 ovn_controller[146538]: 2025-12-04T10:25:18Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  4 05:25:18 np0005545273 podman[150992]: 2025-12-04 10:25:18.988076642 +0000 UTC m=+0.091448108 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:25:19 np0005545273 python3.9[151095]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:19 np0005545273 python3.9[151175]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:20 np0005545273 python3.9[151327]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:21 np0005545273 python3.9[151479]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:21 np0005545273 python3.9[151557]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:22 np0005545273 python3.9[151709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:22 np0005545273 python3.9[151787]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:25:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5444 writes, 23K keys, 5444 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5444 writes, 791 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5444 writes, 23K keys, 5444 commit groups, 1.0 writes per commit group, ingest: 18.49 MB, 0.03 MB/s#012Interval WAL: 5444 writes, 791 syncs, 6.88 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  4 05:25:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:23 np0005545273 python3.9[151939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:25:23 np0005545273 systemd[1]: Reloading.
Dec  4 05:25:23 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:25:23 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:25:24 np0005545273 python3.9[152128]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:25 np0005545273 python3.9[152206]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:25 np0005545273 python3.9[152358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:26 np0005545273 python3.9[152436]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:25:26
Dec  4 05:25:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:25:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:25:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups']
Dec  4 05:25:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:25:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:27 np0005545273 python3.9[152588]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:25:27 np0005545273 systemd[1]: Reloading.
Dec  4 05:25:27 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:25:27 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:25:27 np0005545273 systemd[1]: Starting Create netns directory...
Dec  4 05:25:27 np0005545273 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  4 05:25:27 np0005545273 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  4 05:25:27 np0005545273 systemd[1]: Finished Create netns directory.
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:25:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:25:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:25:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:25:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:25:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:25:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:25:28 np0005545273 python3.9[152781]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:29 np0005545273 python3.9[152933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:25:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.0 total, 600.0 interval#012Cumulative writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s#012Interval WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  4 05:25:29 np0005545273 python3.9[153056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764843928.5339332-333-46758311429135/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:30 np0005545273 python3.9[153208]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:25:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:31 np0005545273 python3.9[153360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:25:31 np0005545273 python3.9[153483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764843930.5942059-358-135726253880116/.source.json _original_basename=.pmbtyb4d follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:32 np0005545273 python3.9[153637]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:34 np0005545273 python3.9[154064]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  4 05:25:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:35 np0005545273 python3.9[154218]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 05:25:36 np0005545273 python3.9[154372]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  4 05:25:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:25:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:25:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:25:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 18.45 MB, 0.03 MB/s#012Interval WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  4 05:25:37 np0005545273 python3[154551]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 05:25:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:43 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec  4 05:25:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:25:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:25:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:47 np0005545273 podman[154565]: 2025-12-04 10:25:47.472272391 +0000 UTC m=+9.592570572 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  4 05:25:47 np0005545273 podman[154806]: 2025-12-04 10:25:47.645563467 +0000 UTC m=+0.066316635 container create 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  4 05:25:47 np0005545273 podman[154806]: 2025-12-04 10:25:47.605599225 +0000 UTC m=+0.026352553 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  4 05:25:47 np0005545273 python3[154551]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:25:48 np0005545273 python3.9[155077]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:25:48 np0005545273 podman[155092]: 2025-12-04 10:25:48.580944666 +0000 UTC m=+0.047764387 container create b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:25:48 np0005545273 systemd[1]: Started libpod-conmon-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope.
Dec  4 05:25:48 np0005545273 podman[155092]: 2025-12-04 10:25:48.559580812 +0000 UTC m=+0.026400543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:25:48 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:25:48 np0005545273 podman[155092]: 2025-12-04 10:25:48.702350659 +0000 UTC m=+0.169170400 container init b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:25:48 np0005545273 podman[155092]: 2025-12-04 10:25:48.717017996 +0000 UTC m=+0.183837717 container start b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:25:48 np0005545273 podman[155092]: 2025-12-04 10:25:48.720855946 +0000 UTC m=+0.187675667 container attach b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:25:48 np0005545273 clever_villani[155132]: 167 167
Dec  4 05:25:48 np0005545273 systemd[1]: libpod-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope: Deactivated successfully.
Dec  4 05:25:48 np0005545273 conmon[155132]: conmon b1cabc71847cb1de5d9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope/container/memory.events
Dec  4 05:25:48 np0005545273 podman[155092]: 2025-12-04 10:25:48.727652276 +0000 UTC m=+0.194472007 container died b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:25:48 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3ecd84b5dcf3b62c85b509e6749ae35dbde22e44988cea49d47fc5ada4c5a6a0-merged.mount: Deactivated successfully.
Dec  4 05:25:48 np0005545273 podman[155092]: 2025-12-04 10:25:48.7808505 +0000 UTC m=+0.247670251 container remove b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:25:48 np0005545273 systemd[1]: libpod-conmon-b1cabc71847cb1de5d9d4953e2662628ddcb771b43d91181b8177620728f4778.scope: Deactivated successfully.
Dec  4 05:25:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:49 np0005545273 podman[155230]: 2025-12-04 10:25:49.038225 +0000 UTC m=+0.070308518 container create 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:25:49 np0005545273 systemd[1]: Started libpod-conmon-39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d.scope.
Dec  4 05:25:49 np0005545273 podman[155230]: 2025-12-04 10:25:49.012675958 +0000 UTC m=+0.044759526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:25:49 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:25:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:49 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:49 np0005545273 podman[155230]: 2025-12-04 10:25:49.158783353 +0000 UTC m=+0.190866911 container init 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:25:49 np0005545273 podman[155230]: 2025-12-04 10:25:49.172587159 +0000 UTC m=+0.204670717 container start 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:25:49 np0005545273 podman[155230]: 2025-12-04 10:25:49.179123894 +0000 UTC m=+0.211207412 container attach 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:25:49 np0005545273 podman[155269]: 2025-12-04 10:25:49.235764869 +0000 UTC m=+0.150264505 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:25:49 np0005545273 python3.9[155322]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:49 np0005545273 elated_elbakyan[155272]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:25:49 np0005545273 elated_elbakyan[155272]: --> All data devices are unavailable
Dec  4 05:25:49 np0005545273 systemd[1]: libpod-39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d.scope: Deactivated successfully.
Dec  4 05:25:49 np0005545273 podman[155230]: 2025-12-04 10:25:49.772477736 +0000 UTC m=+0.804561274 container died 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:25:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9676313e55bf08d62f63bca3e70f56a8b99f0e994520702738239a1893ab2670-merged.mount: Deactivated successfully.
Dec  4 05:25:49 np0005545273 python3.9[155417]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:25:49 np0005545273 podman[155230]: 2025-12-04 10:25:49.823824147 +0000 UTC m=+0.855907665 container remove 39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:25:49 np0005545273 systemd[1]: libpod-conmon-39ea713ac6c3829271db570dd0912d8aed44b29c4535c077fd472571f864b44d.scope: Deactivated successfully.
Dec  4 05:25:50 np0005545273 podman[155578]: 2025-12-04 10:25:50.255506637 +0000 UTC m=+0.044896310 container create 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:25:50 np0005545273 systemd[1]: Started libpod-conmon-579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb.scope.
Dec  4 05:25:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:25:50 np0005545273 podman[155578]: 2025-12-04 10:25:50.231178654 +0000 UTC m=+0.020568347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:25:50 np0005545273 podman[155578]: 2025-12-04 10:25:50.342591252 +0000 UTC m=+0.131980935 container init 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:25:50 np0005545273 podman[155578]: 2025-12-04 10:25:50.349830952 +0000 UTC m=+0.139220615 container start 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:25:50 np0005545273 podman[155578]: 2025-12-04 10:25:50.353474208 +0000 UTC m=+0.142863971 container attach 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:25:50 np0005545273 stupefied_driscoll[155629]: 167 167
Dec  4 05:25:50 np0005545273 systemd[1]: libpod-579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb.scope: Deactivated successfully.
Dec  4 05:25:50 np0005545273 podman[155578]: 2025-12-04 10:25:50.355201069 +0000 UTC m=+0.144590732 container died 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:25:50 np0005545273 systemd[1]: var-lib-containers-storage-overlay-14759ef6adea5c2e69c1aa1c72e9aceedf4b5926d7d84f89b15735dd3b3b9c59-merged.mount: Deactivated successfully.
Dec  4 05:25:50 np0005545273 podman[155578]: 2025-12-04 10:25:50.394230029 +0000 UTC m=+0.183619692 container remove 579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:25:50 np0005545273 systemd[1]: libpod-conmon-579603ebd512a79ed2986baa8be599b1a6a55e1212ae7508b560b62efdf3cbfb.scope: Deactivated successfully.
Dec  4 05:25:50 np0005545273 python3.9[155677]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764843949.9030097-446-215465618958658/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:25:50 np0005545273 podman[155686]: 2025-12-04 10:25:50.59607831 +0000 UTC m=+0.060086339 container create 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:25:50 np0005545273 systemd[1]: Started libpod-conmon-99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb.scope.
Dec  4 05:25:50 np0005545273 podman[155686]: 2025-12-04 10:25:50.561725779 +0000 UTC m=+0.025733848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:25:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:25:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:50 np0005545273 podman[155686]: 2025-12-04 10:25:50.694329966 +0000 UTC m=+0.158337995 container init 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:25:50 np0005545273 podman[155686]: 2025-12-04 10:25:50.701658679 +0000 UTC m=+0.165666688 container start 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:25:50 np0005545273 podman[155686]: 2025-12-04 10:25:50.70552332 +0000 UTC m=+0.169531339 container attach 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:25:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]: {
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:    "0": [
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:        {
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "devices": [
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "/dev/loop3"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            ],
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_name": "ceph_lv0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_size": "21470642176",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "name": "ceph_lv0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "tags": {
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cluster_name": "ceph",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.crush_device_class": "",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.encrypted": "0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.objectstore": "bluestore",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osd_id": "0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.type": "block",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.vdo": "0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.with_tpm": "0"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            },
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "type": "block",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "vg_name": "ceph_vg0"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:        }
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:    ],
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:    "1": [
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:        {
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "devices": [
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "/dev/loop4"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            ],
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_name": "ceph_lv1",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_size": "21470642176",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "name": "ceph_lv1",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "tags": {
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cluster_name": "ceph",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.crush_device_class": "",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.encrypted": "0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.objectstore": "bluestore",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osd_id": "1",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.type": "block",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.vdo": "0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.with_tpm": "0"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            },
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "type": "block",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "vg_name": "ceph_vg1"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:        }
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:    ],
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:    "2": [
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:        {
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "devices": [
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "/dev/loop5"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            ],
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_name": "ceph_lv2",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_size": "21470642176",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "name": "ceph_lv2",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "tags": {
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.cluster_name": "ceph",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.crush_device_class": "",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.encrypted": "0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.objectstore": "bluestore",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osd_id": "2",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.type": "block",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.vdo": "0",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:                "ceph.with_tpm": "0"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            },
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "type": "block",
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:            "vg_name": "ceph_vg2"
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:        }
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]:    ]
Dec  4 05:25:51 np0005545273 brave_blackburn[155709]: }
Dec  4 05:25:51 np0005545273 systemd[1]: libpod-99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb.scope: Deactivated successfully.
Dec  4 05:25:51 np0005545273 podman[155686]: 2025-12-04 10:25:51.054770816 +0000 UTC m=+0.518778865 container died 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:25:51 np0005545273 systemd[1]: var-lib-containers-storage-overlay-cc8dd5693e440c227fbaa1a2942afa2ffd06610f1d72b6ed0d4fb6956ac5768d-merged.mount: Deactivated successfully.
Dec  4 05:25:51 np0005545273 podman[155686]: 2025-12-04 10:25:51.107845878 +0000 UTC m=+0.571853897 container remove 99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:25:51 np0005545273 systemd[1]: libpod-conmon-99252b2d53763ef9a74beb40dc66f00d44e5594d9e6b4d0ff7536481c04755bb.scope: Deactivated successfully.
Dec  4 05:25:51 np0005545273 python3.9[155782]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:25:51 np0005545273 systemd[1]: Reloading.
Dec  4 05:25:51 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:25:51 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:25:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:51 np0005545273 podman[155973]: 2025-12-04 10:25:51.876950675 +0000 UTC m=+0.060229021 container create bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:25:51 np0005545273 systemd[1]: Started libpod-conmon-bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e.scope.
Dec  4 05:25:51 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:25:51 np0005545273 podman[155973]: 2025-12-04 10:25:51.854735782 +0000 UTC m=+0.038014128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:25:51 np0005545273 podman[155973]: 2025-12-04 10:25:51.959147854 +0000 UTC m=+0.142426220 container init bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:25:51 np0005545273 podman[155973]: 2025-12-04 10:25:51.967705876 +0000 UTC m=+0.150984212 container start bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:25:51 np0005545273 podman[155973]: 2025-12-04 10:25:51.971330822 +0000 UTC m=+0.154609178 container attach bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:25:51 np0005545273 admiring_einstein[155989]: 167 167
Dec  4 05:25:51 np0005545273 systemd[1]: libpod-bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e.scope: Deactivated successfully.
Dec  4 05:25:51 np0005545273 podman[155973]: 2025-12-04 10:25:51.974048016 +0000 UTC m=+0.157326352 container died bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:25:52 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a0070fceb6f9a6d51fe2e5baeeec41cb9522f9539351a12f70ee9df63e7c2cc7-merged.mount: Deactivated successfully.
Dec  4 05:25:52 np0005545273 podman[155973]: 2025-12-04 10:25:52.015509753 +0000 UTC m=+0.198788099 container remove bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_einstein, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:25:52 np0005545273 systemd[1]: libpod-conmon-bbe1a84f2080a97c9e0fe084dc4ad90e5c0f9f60367c4466658e006310e7e75e.scope: Deactivated successfully.
Dec  4 05:25:52 np0005545273 python3.9[155960]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:25:52 np0005545273 systemd[1]: Reloading.
Dec  4 05:25:52 np0005545273 podman[156016]: 2025-12-04 10:25:52.194999677 +0000 UTC m=+0.047814160 container create 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:25:52 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:25:52 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:25:52 np0005545273 podman[156016]: 2025-12-04 10:25:52.176216593 +0000 UTC m=+0.029031096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:25:52 np0005545273 systemd[1]: Started libpod-conmon-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope.
Dec  4 05:25:52 np0005545273 systemd[1]: Starting ovn_metadata_agent container...
Dec  4 05:25:52 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:25:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:52 np0005545273 podman[156016]: 2025-12-04 10:25:52.510729402 +0000 UTC m=+0.363543905 container init 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:25:52 np0005545273 podman[156016]: 2025-12-04 10:25:52.525874129 +0000 UTC m=+0.378688642 container start 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:25:52 np0005545273 podman[156016]: 2025-12-04 10:25:52.53142705 +0000 UTC m=+0.384241563 container attach 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:25:52 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:25:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b241c4d4c1b7ae85a99193280a3cd8c6217a74fed81c66a9a358d4273cda809/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b241c4d4c1b7ae85a99193280a3cd8c6217a74fed81c66a9a358d4273cda809/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  4 05:25:52 np0005545273 systemd[1]: Started /usr/bin/podman healthcheck run 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567.
Dec  4 05:25:52 np0005545273 podman[156074]: 2025-12-04 10:25:52.642840477 +0000 UTC m=+0.159497412 container init 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + sudo -E kolla_set_configs
Dec  4 05:25:52 np0005545273 podman[156074]: 2025-12-04 10:25:52.670725645 +0000 UTC m=+0.187382570 container start 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  4 05:25:52 np0005545273 edpm-start-podman-container[156074]: ovn_metadata_agent
Dec  4 05:25:52 np0005545273 edpm-start-podman-container[156072]: Creating additional drop-in dependency for "ovn_metadata_agent" (292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567)
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Validating config file
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Copying service configuration files
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Writing out command to execute
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  4 05:25:52 np0005545273 podman[156096]: 2025-12-04 10:25:52.764853846 +0000 UTC m=+0.082388614 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: ++ cat /run_command
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + CMD=neutron-ovn-metadata-agent
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + ARGS=
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + sudo kolla_copy_cacerts
Dec  4 05:25:52 np0005545273 systemd[1]: Reloading.
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + [[ ! -n '' ]]
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + . kolla_extend_start
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: Running command: 'neutron-ovn-metadata-agent'
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + umask 0022
Dec  4 05:25:52 np0005545273 ovn_metadata_agent[156090]: + exec neutron-ovn-metadata-agent
Dec  4 05:25:52 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:25:52 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:25:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:53 np0005545273 systemd[1]: Started ovn_metadata_agent container.
Dec  4 05:25:53 np0005545273 lvm[156276]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:25:53 np0005545273 lvm[156276]: VG ceph_vg1 finished
Dec  4 05:25:53 np0005545273 lvm[156275]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:25:53 np0005545273 lvm[156275]: VG ceph_vg0 finished
Dec  4 05:25:53 np0005545273 lvm[156278]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:25:53 np0005545273 lvm[156278]: VG ceph_vg2 finished
Dec  4 05:25:53 np0005545273 romantic_banzai[156070]: {}
Dec  4 05:25:53 np0005545273 systemd[1]: libpod-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope: Deactivated successfully.
Dec  4 05:25:53 np0005545273 systemd[1]: libpod-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope: Consumed 1.395s CPU time.
Dec  4 05:25:53 np0005545273 podman[156016]: 2025-12-04 10:25:53.41002465 +0000 UTC m=+1.262839163 container died 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:25:53 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1102cc9290d84dea1625a7f17ea358a196943a964ff0fe60ee539e4df9260fa6-merged.mount: Deactivated successfully.
Dec  4 05:25:53 np0005545273 podman[156016]: 2025-12-04 10:25:53.476348225 +0000 UTC m=+1.329162748 container remove 6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:25:53 np0005545273 systemd[1]: libpod-conmon-6fa28e33e7eac3d71b50866d77d6661a101a11a22e8679f381d1ba65618b9267.scope: Deactivated successfully.
Dec  4 05:25:53 np0005545273 systemd[1]: session-48.scope: Deactivated successfully.
Dec  4 05:25:53 np0005545273 systemd[1]: session-48.scope: Consumed 56.768s CPU time.
Dec  4 05:25:53 np0005545273 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Dec  4 05:25:53 np0005545273 systemd-logind[798]: Removed session 48.
Dec  4 05:25:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:25:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:25:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.842 156095 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.843 156095 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.844 156095 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.845 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.846 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.847 156095 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.848 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.849 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.850 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.851 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.852 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.853 156095 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.854 156095 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.855 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.856 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.857 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.858 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.859 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.860 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.861 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.862 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.863 156095 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.864 156095 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.865 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.866 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.867 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.868 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.869 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.870 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.871 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.872 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.873 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.874 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.875 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.876 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.877 156095 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.887 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.888 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.888 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.888 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.889 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.901 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 565580d5-3422-4e11-b563-3f1a3db67238 (UUID: 565580d5-3422-4e11-b563-3f1a3db67238) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.927 156095 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.927 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.928 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.928 156095 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.930 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.937 156095 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  4 05:25:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.942 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '565580d5-3422-4e11-b563-3f1a3db67238'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f5e2acc0be0>], external_ids={}, name=565580d5-3422-4e11-b563-3f1a3db67238, nb_cfg_timestamp=1764843896953, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.943 156095 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5e2ac3a310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.944 156095 INFO oslo_service.service [-] Starting 1 workers
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.948 156095 DEBUG oslo_service.service [-] Started child 156321 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.952 156095 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpasqnjo3q/privsep.sock']
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.955 156321 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-169241'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec  4 05:25:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.998 156321 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:54.999 156321 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.000 156321 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.005 156321 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.014 156321 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.028 156321 INFO eventlet.wsgi.server [-] (156321) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec  4 05:25:55 np0005545273 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.661 156095 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.662 156095 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpasqnjo3q/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.500 156326 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.508 156326 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.513 156326 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.514 156326 INFO oslo.privsep.daemon [-] privsep daemon running as pid 156326
Dec  4 05:25:55 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:55.666 156326 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2aae2-f991-42f3-be8e-23af56f86d71]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.222 156326 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.223 156326 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.223 156326 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  4 05:25:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.874 156326 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2250c3-ba22-49a4-8689-223efaca0ec9]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.877 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, column=external_ids, values=({'neutron:ovn-metadata-id': 'c6ca2f93-5873-55c3-abb7-70ed124c9f2a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.886 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.893 156095 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.893 156095 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.894 156095 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.895 156095 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.896 156095 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.897 156095 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.898 156095 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.899 156095 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.900 156095 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.901 156095 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.902 156095 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.903 156095 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.904 156095 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.905 156095 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.906 156095 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.907 156095 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.908 156095 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.909 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.910 156095 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.911 156095 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.912 156095 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.913 156095 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.914 156095 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.920 156095 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.920 156095 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.921 156095 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.922 156095 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.923 156095 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.924 156095 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.925 156095 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.926 156095 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.927 156095 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.928 156095 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.929 156095 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.930 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.931 156095 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.932 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.933 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.934 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.935 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:25:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:25:56.936 156095 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  4 05:25:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:25:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:25:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:25:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:25:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:25:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:25:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:25:59 np0005545273 systemd-logind[798]: New session 49 of user zuul.
Dec  4 05:25:59 np0005545273 systemd[1]: Started Session 49 of User zuul.
Dec  4 05:26:00 np0005545273 python3.9[156486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:26:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:01 np0005545273 python3.9[156642]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:03 np0005545273 python3.9[156806]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:26:03 np0005545273 systemd[1]: Reloading.
Dec  4 05:26:03 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:26:03 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:26:04 np0005545273 python3.9[156991]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:26:04 np0005545273 network[157008]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:26:04 np0005545273 network[157009]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:26:04 np0005545273 network[157010]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:26:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:09 np0005545273 python3.9[157273]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:26:10 np0005545273 python3.9[157428]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:26:10 np0005545273 python3.9[157581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:26:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:11 np0005545273 python3.9[157734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:26:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:12 np0005545273 python3.9[157887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:26:12 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:26:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:13 np0005545273 python3.9[158041]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:26:14 np0005545273 python3.9[158194]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:26:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:15 np0005545273 python3.9[158347]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:15 np0005545273 python3.9[158499]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:16 np0005545273 python3.9[158651]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:17 np0005545273 python3.9[158803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:17 np0005545273 python3.9[158955]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:18 np0005545273 python3.9[159107]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:19 np0005545273 python3.9[159259]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:20 np0005545273 podman[159382]: 2025-12-04 10:26:20.069866866 +0000 UTC m=+0.180208837 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  4 05:26:20 np0005545273 python3.9[159429]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:20 np0005545273 python3.9[159589]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:21 np0005545273 python3.9[159741]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:22 np0005545273 python3.9[159893]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:22 np0005545273 python3.9[160045]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:22 np0005545273 podman[160076]: 2025-12-04 10:26:22.957004408 +0000 UTC m=+0.058645176 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:26:23 np0005545273 python3.9[160216]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:23 np0005545273 python3.9[160368]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:26:24 np0005545273 python3.9[160520]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:25 np0005545273 python3.9[160672]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 05:26:26 np0005545273 python3.9[160824]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:26:26 np0005545273 systemd[1]: Reloading.
Dec  4 05:26:26 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:26:26 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:26:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:26:26
Dec  4 05:26:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:26:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:26:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', '.rgw.root', 'default.rgw.log']
Dec  4 05:26:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:26:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:27 np0005545273 python3.9[161012]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:27 np0005545273 python3.9[161165]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:26:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:26:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:26:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:26:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:26:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:26:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:26:28 np0005545273 python3.9[161318]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:29 np0005545273 python3.9[161471]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:29 np0005545273 python3.9[161624]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:30 np0005545273 python3.9[161777]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:30 np0005545273 python3.9[161930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:26:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:31 np0005545273 python3.9[162083]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  4 05:26:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:32 np0005545273 python3.9[162236]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 05:26:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:33 np0005545273 python3.9[162394]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 05:26:34 np0005545273 python3.9[162554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:26:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:35 np0005545273 python3.9[162638]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:26:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:26:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:26:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:51 np0005545273 podman[162652]: 2025-12-04 10:26:51.202049799 +0000 UTC m=+0.303130111 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  4 05:26:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:26:53 np0005545273 podman[162704]: 2025-12-04 10:26:53.8016764 +0000 UTC m=+0.058361749 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:26:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:26:54 np0005545273 podman[162868]: 2025-12-04 10:26:54.881518505 +0000 UTC m=+0.052548012 container create 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:26:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:26:54.890 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:26:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:26:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:26:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:26:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:26:54 np0005545273 systemd[1]: Started libpod-conmon-3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee.scope.
Dec  4 05:26:54 np0005545273 podman[162868]: 2025-12-04 10:26:54.853305879 +0000 UTC m=+0.024335406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:26:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec  4 05:26:54 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:26:54 np0005545273 podman[162868]: 2025-12-04 10:26:54.990060959 +0000 UTC m=+0.161090486 container init 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:26:54 np0005545273 podman[162868]: 2025-12-04 10:26:54.999163134 +0000 UTC m=+0.170192641 container start 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:26:55 np0005545273 podman[162868]: 2025-12-04 10:26:55.002613626 +0000 UTC m=+0.173643133 container attach 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:26:55 np0005545273 musing_mcnulty[162887]: 167 167
Dec  4 05:26:55 np0005545273 systemd[1]: libpod-3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee.scope: Deactivated successfully.
Dec  4 05:26:55 np0005545273 podman[162868]: 2025-12-04 10:26:55.006012126 +0000 UTC m=+0.177041633 container died 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:26:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:26:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:26:55 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:26:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-faeb26bc59c590652d44895cb3755ba3b3a4b8dc99701c5deb178b9ea288300b-merged.mount: Deactivated successfully.
Dec  4 05:26:55 np0005545273 podman[162868]: 2025-12-04 10:26:55.053684472 +0000 UTC m=+0.224713979 container remove 3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mcnulty, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:26:55 np0005545273 systemd[1]: libpod-conmon-3f015aab5ee1a1103a7382ab379f488121024221faa3ccda6a23f484839802ee.scope: Deactivated successfully.
Dec  4 05:26:55 np0005545273 podman[162920]: 2025-12-04 10:26:55.233628802 +0000 UTC m=+0.049042419 container create 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:26:55 np0005545273 systemd[1]: Started libpod-conmon-23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c.scope.
Dec  4 05:26:55 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:26:55 np0005545273 podman[162920]: 2025-12-04 10:26:55.213842785 +0000 UTC m=+0.029256412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:26:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:55 np0005545273 podman[162920]: 2025-12-04 10:26:55.323463944 +0000 UTC m=+0.138877561 container init 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:26:55 np0005545273 podman[162920]: 2025-12-04 10:26:55.331718309 +0000 UTC m=+0.147131926 container start 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:26:55 np0005545273 podman[162920]: 2025-12-04 10:26:55.337083176 +0000 UTC m=+0.152496843 container attach 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:26:55 np0005545273 boring_spence[162941]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:26:55 np0005545273 boring_spence[162941]: --> All data devices are unavailable
Dec  4 05:26:55 np0005545273 systemd[1]: libpod-23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c.scope: Deactivated successfully.
Dec  4 05:26:55 np0005545273 podman[162979]: 2025-12-04 10:26:55.880644614 +0000 UTC m=+0.029558199 container died 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:26:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3baa8bd1bf11f3007bf05cfbfbf641a526fd628d40915233ae1bf4dacbe136fb-merged.mount: Deactivated successfully.
Dec  4 05:26:55 np0005545273 podman[162979]: 2025-12-04 10:26:55.922725408 +0000 UTC m=+0.071638973 container remove 23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:26:55 np0005545273 systemd[1]: libpod-conmon-23dd1b899189d8c61fd3d043a1722532684afd8dad78bc374d3b4a0d32a7be0c.scope: Deactivated successfully.
Dec  4 05:26:56 np0005545273 podman[163076]: 2025-12-04 10:26:56.35254705 +0000 UTC m=+0.046890679 container create ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:26:56 np0005545273 systemd[1]: Started libpod-conmon-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope.
Dec  4 05:26:56 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:26:56 np0005545273 podman[163076]: 2025-12-04 10:26:56.332919716 +0000 UTC m=+0.027263355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:26:56 np0005545273 podman[163076]: 2025-12-04 10:26:56.436669617 +0000 UTC m=+0.131013256 container init ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:26:56 np0005545273 podman[163076]: 2025-12-04 10:26:56.4431613 +0000 UTC m=+0.137504919 container start ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:26:56 np0005545273 elegant_cohen[163098]: 167 167
Dec  4 05:26:56 np0005545273 podman[163076]: 2025-12-04 10:26:56.447289867 +0000 UTC m=+0.141633516 container attach ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:26:56 np0005545273 systemd[1]: libpod-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope: Deactivated successfully.
Dec  4 05:26:56 np0005545273 conmon[163098]: conmon ef0f091fde96c8d19563 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope/container/memory.events
Dec  4 05:26:56 np0005545273 podman[163076]: 2025-12-04 10:26:56.449878038 +0000 UTC m=+0.144221677 container died ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:26:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-db1760ef700d410dca131626fcb47b105a93cf2e4ca17d1123cf36277483663e-merged.mount: Deactivated successfully.
Dec  4 05:26:56 np0005545273 podman[163076]: 2025-12-04 10:26:56.488157893 +0000 UTC m=+0.182501532 container remove ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:26:56 np0005545273 systemd[1]: libpod-conmon-ef0f091fde96c8d1956397466cba329db27fdeef41ddd690f3fccf41acf983c3.scope: Deactivated successfully.
Dec  4 05:26:56 np0005545273 podman[163130]: 2025-12-04 10:26:56.659324836 +0000 UTC m=+0.042191148 container create b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:26:56 np0005545273 systemd[1]: Started libpod-conmon-b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73.scope.
Dec  4 05:26:56 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:26:56 np0005545273 podman[163130]: 2025-12-04 10:26:56.639862476 +0000 UTC m=+0.022728808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:26:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:56 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:56 np0005545273 podman[163130]: 2025-12-04 10:26:56.754017293 +0000 UTC m=+0.136883615 container init b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:26:56 np0005545273 podman[163130]: 2025-12-04 10:26:56.760619208 +0000 UTC m=+0.143485520 container start b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:26:56 np0005545273 podman[163130]: 2025-12-04 10:26:56.763837784 +0000 UTC m=+0.146704146 container attach b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:26:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:26:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Dec  4 05:26:57 np0005545273 gifted_pare[163149]: {
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:    "0": [
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:        {
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "devices": [
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "/dev/loop3"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            ],
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_name": "ceph_lv0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_size": "21470642176",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "name": "ceph_lv0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "tags": {
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cluster_name": "ceph",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.crush_device_class": "",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.encrypted": "0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.objectstore": "bluestore",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osd_id": "0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.type": "block",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.vdo": "0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.with_tpm": "0"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            },
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "type": "block",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "vg_name": "ceph_vg0"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:        }
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:    ],
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:    "1": [
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:        {
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "devices": [
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "/dev/loop4"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            ],
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_name": "ceph_lv1",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_size": "21470642176",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "name": "ceph_lv1",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "tags": {
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cluster_name": "ceph",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.crush_device_class": "",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.encrypted": "0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.objectstore": "bluestore",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osd_id": "1",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.type": "block",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.vdo": "0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.with_tpm": "0"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            },
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "type": "block",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "vg_name": "ceph_vg1"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:        }
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:    ],
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:    "2": [
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:        {
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "devices": [
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "/dev/loop5"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            ],
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_name": "ceph_lv2",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_size": "21470642176",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "name": "ceph_lv2",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "tags": {
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.cluster_name": "ceph",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.crush_device_class": "",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.encrypted": "0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.objectstore": "bluestore",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osd_id": "2",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.type": "block",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.vdo": "0",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:                "ceph.with_tpm": "0"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            },
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "type": "block",
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:            "vg_name": "ceph_vg2"
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:        }
Dec  4 05:26:57 np0005545273 gifted_pare[163149]:    ]
Dec  4 05:26:57 np0005545273 gifted_pare[163149]: }
Dec  4 05:26:57 np0005545273 systemd[1]: libpod-b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73.scope: Deactivated successfully.
Dec  4 05:26:57 np0005545273 podman[163130]: 2025-12-04 10:26:57.060515601 +0000 UTC m=+0.443381933 container died b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:26:57 np0005545273 systemd[1]: var-lib-containers-storage-overlay-81f7a4944c52f42f485e53a72811c42a3fb53334c108e04d59dd179233e166eb-merged.mount: Deactivated successfully.
Dec  4 05:26:57 np0005545273 podman[163130]: 2025-12-04 10:26:57.103054036 +0000 UTC m=+0.485920348 container remove b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:26:57 np0005545273 systemd[1]: libpod-conmon-b2cc6004a3b30afe155016c00c8b6b35100f349deb364d4f8e125e500c7bbd73.scope: Deactivated successfully.
Dec  4 05:26:57 np0005545273 podman[163258]: 2025-12-04 10:26:57.57700055 +0000 UTC m=+0.063415619 container create f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:26:57 np0005545273 systemd[1]: Started libpod-conmon-f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3.scope.
Dec  4 05:26:57 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:26:57 np0005545273 podman[163258]: 2025-12-04 10:26:57.550223778 +0000 UTC m=+0.036638867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:26:57 np0005545273 podman[163258]: 2025-12-04 10:26:57.654961012 +0000 UTC m=+0.141376071 container init f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:26:57 np0005545273 podman[163258]: 2025-12-04 10:26:57.66335449 +0000 UTC m=+0.149769539 container start f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:26:57 np0005545273 compassionate_taussig[163278]: 167 167
Dec  4 05:26:57 np0005545273 podman[163258]: 2025-12-04 10:26:57.66677616 +0000 UTC m=+0.153191239 container attach f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:26:57 np0005545273 systemd[1]: libpod-f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3.scope: Deactivated successfully.
Dec  4 05:26:57 np0005545273 podman[163258]: 2025-12-04 10:26:57.66803468 +0000 UTC m=+0.154449739 container died f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  4 05:26:57 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0cae07ce66045a26bfe2f671c6cfb3790e9836dd99e5b16a5b373f496bba80a9-merged.mount: Deactivated successfully.
Dec  4 05:26:57 np0005545273 podman[163258]: 2025-12-04 10:26:57.716380582 +0000 UTC m=+0.202795651 container remove f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:26:57 np0005545273 systemd[1]: libpod-conmon-f8abf9b40d8a82a27913b98566e109cb1682e05bdc1b4f40e91bed2471c95cb3.scope: Deactivated successfully.
Dec  4 05:26:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:26:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:26:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:26:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:26:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:26:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:26:57 np0005545273 podman[163309]: 2025-12-04 10:26:57.937902004 +0000 UTC m=+0.075464903 container create b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  4 05:26:57 np0005545273 systemd[1]: Started libpod-conmon-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope.
Dec  4 05:26:58 np0005545273 podman[163309]: 2025-12-04 10:26:57.909602626 +0000 UTC m=+0.047165455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:26:58 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:26:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:58 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:26:58 np0005545273 podman[163309]: 2025-12-04 10:26:58.037932958 +0000 UTC m=+0.175495847 container init b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  4 05:26:58 np0005545273 podman[163309]: 2025-12-04 10:26:58.052363808 +0000 UTC m=+0.189926597 container start b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:26:58 np0005545273 podman[163309]: 2025-12-04 10:26:58.056638009 +0000 UTC m=+0.194200848 container attach b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:26:58 np0005545273 lvm[163436]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:26:58 np0005545273 lvm[163437]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:26:58 np0005545273 lvm[163440]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:26:58 np0005545273 lvm[163440]: VG ceph_vg2 finished
Dec  4 05:26:58 np0005545273 lvm[163437]: VG ceph_vg1 finished
Dec  4 05:26:58 np0005545273 lvm[163436]: VG ceph_vg0 finished
Dec  4 05:26:58 np0005545273 tender_wing[163329]: {}
Dec  4 05:26:58 np0005545273 systemd[1]: libpod-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope: Deactivated successfully.
Dec  4 05:26:58 np0005545273 systemd[1]: libpod-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope: Consumed 1.351s CPU time.
Dec  4 05:26:58 np0005545273 podman[163309]: 2025-12-04 10:26:58.940328881 +0000 UTC m=+1.077891670 container died b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec  4 05:26:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:26:59 np0005545273 systemd[1]: var-lib-containers-storage-overlay-adc89fa3861568161bf283007df17774ba1fe410d4c65623e08ca0423786f3e4-merged.mount: Deactivated successfully.
Dec  4 05:26:59 np0005545273 podman[163309]: 2025-12-04 10:26:59.058273406 +0000 UTC m=+1.195836195 container remove b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_wing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:26:59 np0005545273 systemd[1]: libpod-conmon-b4cc0bb599b733b2bfbbf4764ee0c60305d0f1121e3469fd82c470c5844e3426.scope: Deactivated successfully.
Dec  4 05:26:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:26:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:26:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:26:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:27:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:27:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:27:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:27:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:27:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:27:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec  4 05:27:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Dec  4 05:27:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:18 np0005545273 kernel: SELinux:  Converting 2770 SID table entries...
Dec  4 05:27:18 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 05:27:18 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 05:27:18 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 05:27:18 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 05:27:18 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 05:27:18 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 05:27:18 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 05:27:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:21 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec  4 05:27:22 np0005545273 podman[163503]: 2025-12-04 10:27:22.012853795 +0000 UTC m=+0.106343337 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  4 05:27:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:23 np0005545273 podman[163529]: 2025-12-04 10:27:23.96103632 +0000 UTC m=+0.060614530 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  4 05:27:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:27:26
Dec  4 05:27:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:27:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:27:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Dec  4 05:27:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:27:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:27:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:27:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:27:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:27:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:27:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:27:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:27:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:30 np0005545273 kernel: SELinux:  Converting 2770 SID table entries...
Dec  4 05:27:30 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 05:27:30 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 05:27:30 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 05:27:30 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 05:27:30 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 05:27:30 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 05:27:30 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 05:27:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:27:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:27:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:52 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  4 05:27:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:52 np0005545273 podman[170401]: 2025-12-04 10:27:52.994022458 +0000 UTC m=+0.091210587 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  4 05:27:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:27:54.892 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:27:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:27:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:27:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:27:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:27:54 np0005545273 podman[171844]: 2025-12-04 10:27:54.966054424 +0000 UTC m=+0.075382941 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:27:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:27:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:27:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:27:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:27:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:27:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:27:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:27:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:27:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:00 np0005545273 podman[175323]: 2025-12-04 10:28:00.16239167 +0000 UTC m=+0.341465788 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:28:00 np0005545273 podman[175659]: 2025-12-04 10:28:00.348372656 +0000 UTC m=+0.080325427 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:28:00 np0005545273 podman[175323]: 2025-12-04 10:28:00.355599372 +0000 UTC m=+0.534673490 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:28:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:28:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:28:02 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:02 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:02 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:28:02 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:02 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:28:02 np0005545273 podman[177237]: 2025-12-04 10:28:02.157762955 +0000 UTC m=+0.048582103 container create a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:28:02 np0005545273 systemd[1]: Started libpod-conmon-a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70.scope.
Dec  4 05:28:02 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:28:02 np0005545273 podman[177237]: 2025-12-04 10:28:02.139604476 +0000 UTC m=+0.030423624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:28:02 np0005545273 podman[177237]: 2025-12-04 10:28:02.247405676 +0000 UTC m=+0.138224894 container init a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:28:02 np0005545273 podman[177237]: 2025-12-04 10:28:02.256178118 +0000 UTC m=+0.146997256 container start a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:28:02 np0005545273 gifted_shtern[177326]: 167 167
Dec  4 05:28:02 np0005545273 systemd[1]: libpod-a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70.scope: Deactivated successfully.
Dec  4 05:28:02 np0005545273 podman[177237]: 2025-12-04 10:28:02.261029731 +0000 UTC m=+0.151848869 container attach a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:28:02 np0005545273 podman[177237]: 2025-12-04 10:28:02.261733656 +0000 UTC m=+0.152552834 container died a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:28:02 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2e8c71c9277bad14dbfd95852d5d39af2013f260a06279a07eb6360bcce488ed-merged.mount: Deactivated successfully.
Dec  4 05:28:02 np0005545273 podman[177237]: 2025-12-04 10:28:02.312777046 +0000 UTC m=+0.203596224 container remove a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:28:02 np0005545273 systemd[1]: libpod-conmon-a4567ecfbad48268af0b1e29470641c95d4ef0c9e0f6b8877955142343fa6b70.scope: Deactivated successfully.
Dec  4 05:28:02 np0005545273 podman[177500]: 2025-12-04 10:28:02.504177456 +0000 UTC m=+0.050183610 container create 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:28:02 np0005545273 systemd[1]: Started libpod-conmon-4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e.scope.
Dec  4 05:28:02 np0005545273 podman[177500]: 2025-12-04 10:28:02.481138074 +0000 UTC m=+0.027144218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:28:02 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:28:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:02 np0005545273 podman[177500]: 2025-12-04 10:28:02.597801538 +0000 UTC m=+0.143807662 container init 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:28:02 np0005545273 podman[177500]: 2025-12-04 10:28:02.607639045 +0000 UTC m=+0.153645169 container start 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:28:02 np0005545273 podman[177500]: 2025-12-04 10:28:02.611647498 +0000 UTC m=+0.157653702 container attach 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:28:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:03 np0005545273 dreamy_lumiere[177584]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:28:03 np0005545273 dreamy_lumiere[177584]: --> All data devices are unavailable
Dec  4 05:28:03 np0005545273 systemd[1]: libpod-4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e.scope: Deactivated successfully.
Dec  4 05:28:03 np0005545273 podman[177500]: 2025-12-04 10:28:03.132559409 +0000 UTC m=+0.678565533 container died 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:28:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay-60f6de1f2b1ff0e673f93fe986e507aa85d7807538046d0a2402366bad3d6ba8-merged.mount: Deactivated successfully.
Dec  4 05:28:03 np0005545273 podman[177500]: 2025-12-04 10:28:03.178086481 +0000 UTC m=+0.724092615 container remove 4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:28:03 np0005545273 systemd[1]: libpod-conmon-4391dde81c0ac42400e3950b849ca2e726d8f3a45ddef973277fb13665bada9e.scope: Deactivated successfully.
Dec  4 05:28:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:03 np0005545273 podman[178380]: 2025-12-04 10:28:03.66211817 +0000 UTC m=+0.044598982 container create 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:28:03 np0005545273 systemd[1]: Started libpod-conmon-9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf.scope.
Dec  4 05:28:03 np0005545273 podman[178380]: 2025-12-04 10:28:03.642964938 +0000 UTC m=+0.025445770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:28:03 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:28:03 np0005545273 podman[178380]: 2025-12-04 10:28:03.761309391 +0000 UTC m=+0.143790223 container init 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:28:03 np0005545273 podman[178380]: 2025-12-04 10:28:03.768334643 +0000 UTC m=+0.150815455 container start 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec  4 05:28:03 np0005545273 podman[178380]: 2025-12-04 10:28:03.771601918 +0000 UTC m=+0.154082750 container attach 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:28:03 np0005545273 sad_varahamihira[178465]: 167 167
Dec  4 05:28:03 np0005545273 systemd[1]: libpod-9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf.scope: Deactivated successfully.
Dec  4 05:28:03 np0005545273 podman[178380]: 2025-12-04 10:28:03.773658236 +0000 UTC m=+0.156139048 container died 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:28:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1406b06520a36e1bd0b66c84a7c729a300af72ab8ee1314aed4b43a34c2e2f35-merged.mount: Deactivated successfully.
Dec  4 05:28:03 np0005545273 podman[178380]: 2025-12-04 10:28:03.818836389 +0000 UTC m=+0.201317191 container remove 9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:28:03 np0005545273 systemd[1]: libpod-conmon-9d422b350c0c7d7a8cd399263e9d3c5fbf00e3055f7d539cfa86072b6bf836cf.scope: Deactivated successfully.
Dec  4 05:28:04 np0005545273 podman[178638]: 2025-12-04 10:28:04.000403213 +0000 UTC m=+0.046074165 container create 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:28:04 np0005545273 systemd[1]: Started libpod-conmon-874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b.scope.
Dec  4 05:28:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:28:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:04 np0005545273 podman[178638]: 2025-12-04 10:28:04.073337857 +0000 UTC m=+0.119008819 container init 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:28:04 np0005545273 podman[178638]: 2025-12-04 10:28:03.981834395 +0000 UTC m=+0.027505367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:28:04 np0005545273 podman[178638]: 2025-12-04 10:28:04.079041279 +0000 UTC m=+0.124712221 container start 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:28:04 np0005545273 podman[178638]: 2025-12-04 10:28:04.081992927 +0000 UTC m=+0.127664009 container attach 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]: {
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:    "0": [
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:        {
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "devices": [
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "/dev/loop3"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            ],
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_name": "ceph_lv0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_size": "21470642176",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "name": "ceph_lv0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "tags": {
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cluster_name": "ceph",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.crush_device_class": "",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.encrypted": "0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.objectstore": "bluestore",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osd_id": "0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.type": "block",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.vdo": "0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.with_tpm": "0"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            },
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "type": "block",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "vg_name": "ceph_vg0"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:        }
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:    ],
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:    "1": [
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:        {
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "devices": [
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "/dev/loop4"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            ],
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_name": "ceph_lv1",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_size": "21470642176",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "name": "ceph_lv1",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "tags": {
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cluster_name": "ceph",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.crush_device_class": "",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.encrypted": "0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.objectstore": "bluestore",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osd_id": "1",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.type": "block",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.vdo": "0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.with_tpm": "0"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            },
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "type": "block",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "vg_name": "ceph_vg1"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:        }
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:    ],
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:    "2": [
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:        {
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "devices": [
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "/dev/loop5"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            ],
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_name": "ceph_lv2",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_size": "21470642176",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "name": "ceph_lv2",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "tags": {
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.cluster_name": "ceph",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.crush_device_class": "",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.encrypted": "0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.objectstore": "bluestore",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osd_id": "2",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.type": "block",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.vdo": "0",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:                "ceph.with_tpm": "0"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            },
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "type": "block",
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:            "vg_name": "ceph_vg2"
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:        }
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]:    ]
Dec  4 05:28:04 np0005545273 awesome_diffie[178714]: }
Dec  4 05:28:04 np0005545273 systemd[1]: libpod-874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b.scope: Deactivated successfully.
Dec  4 05:28:04 np0005545273 podman[178638]: 2025-12-04 10:28:04.374930373 +0000 UTC m=+0.420601325 container died 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:28:04 np0005545273 systemd[1]: var-lib-containers-storage-overlay-70c7e926e7207f014b9819717ad553b749d7558af8369470d34f21880a3fe112-merged.mount: Deactivated successfully.
Dec  4 05:28:04 np0005545273 podman[178638]: 2025-12-04 10:28:04.418265434 +0000 UTC m=+0.463936386 container remove 874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:28:04 np0005545273 systemd[1]: libpod-conmon-874a8187b72e95b6f1a9f304b6771391da7a501d38e2181ea224d788adede42b.scope: Deactivated successfully.
Dec  4 05:28:04 np0005545273 podman[179366]: 2025-12-04 10:28:04.895561398 +0000 UTC m=+0.048085342 container create 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:28:04 np0005545273 systemd[1]: Started libpod-conmon-1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701.scope.
Dec  4 05:28:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:28:04 np0005545273 podman[179366]: 2025-12-04 10:28:04.872473325 +0000 UTC m=+0.024997279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:28:04 np0005545273 podman[179366]: 2025-12-04 10:28:04.983524329 +0000 UTC m=+0.136048283 container init 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:28:04 np0005545273 podman[179366]: 2025-12-04 10:28:04.991426272 +0000 UTC m=+0.143950216 container start 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:28:04 np0005545273 podman[179366]: 2025-12-04 10:28:04.994642466 +0000 UTC m=+0.147166410 container attach 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 05:28:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:04 np0005545273 gifted_liskov[179438]: 167 167
Dec  4 05:28:04 np0005545273 systemd[1]: libpod-1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701.scope: Deactivated successfully.
Dec  4 05:28:04 np0005545273 podman[179366]: 2025-12-04 10:28:04.99739264 +0000 UTC m=+0.149916574 container died 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:28:05 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a8ac52727d26336cff703553edf47760419b5db827753a317d29abf90bf64196-merged.mount: Deactivated successfully.
Dec  4 05:28:05 np0005545273 podman[179366]: 2025-12-04 10:28:05.03074511 +0000 UTC m=+0.183269054 container remove 1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:28:05 np0005545273 systemd[1]: libpod-conmon-1e9c8f9132c5fe84b8cf4142eefbdc33e3ae231c151ab8022abbbd74e1717701.scope: Deactivated successfully.
Dec  4 05:28:05 np0005545273 podman[179599]: 2025-12-04 10:28:05.188236937 +0000 UTC m=+0.039734228 container create e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:28:05 np0005545273 systemd[1]: Started libpod-conmon-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope.
Dec  4 05:28:05 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:28:05 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:05 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:05 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:05 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:28:05 np0005545273 podman[179599]: 2025-12-04 10:28:05.171226964 +0000 UTC m=+0.022724275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:28:05 np0005545273 podman[179599]: 2025-12-04 10:28:05.272589826 +0000 UTC m=+0.124087137 container init e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:28:05 np0005545273 podman[179599]: 2025-12-04 10:28:05.278778839 +0000 UTC m=+0.130276130 container start e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:28:05 np0005545273 podman[179599]: 2025-12-04 10:28:05.282066024 +0000 UTC m=+0.133563325 container attach e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:28:05 np0005545273 lvm[180285]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:28:06 np0005545273 lvm[180285]: VG ceph_vg0 finished
Dec  4 05:28:06 np0005545273 lvm[180296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:28:06 np0005545273 lvm[180296]: VG ceph_vg2 finished
Dec  4 05:28:06 np0005545273 lvm[180288]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:28:06 np0005545273 lvm[180288]: VG ceph_vg1 finished
Dec  4 05:28:06 np0005545273 admiring_noyce[179673]: {}
Dec  4 05:28:06 np0005545273 systemd[1]: libpod-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope: Deactivated successfully.
Dec  4 05:28:06 np0005545273 systemd[1]: libpod-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope: Consumed 1.328s CPU time.
Dec  4 05:28:06 np0005545273 podman[179599]: 2025-12-04 10:28:06.128909663 +0000 UTC m=+0.980406974 container died e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:28:06 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4c903fbcd8884f47946c4a14e133e5256f70e93a365e127781f6a57ebafc83e9-merged.mount: Deactivated successfully.
Dec  4 05:28:06 np0005545273 podman[179599]: 2025-12-04 10:28:06.311359217 +0000 UTC m=+1.162856528 container remove e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_noyce, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:28:06 np0005545273 systemd[1]: libpod-conmon-e2ad62b6f57ab9b3893ac02aa5d716a9b2f77ae90bcbd7dbe0fa22cbde379eba.scope: Deactivated successfully.
Dec  4 05:28:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:28:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:28:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:28:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:21 np0005545273 kernel: SELinux:  Converting 2771 SID table entries...
Dec  4 05:28:21 np0005545273 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 05:28:21 np0005545273 kernel: SELinux:  policy capability open_perms=1
Dec  4 05:28:21 np0005545273 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 05:28:21 np0005545273 kernel: SELinux:  policy capability always_check_network=0
Dec  4 05:28:21 np0005545273 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 05:28:21 np0005545273 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 05:28:21 np0005545273 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 05:28:22 np0005545273 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec  4 05:28:22 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  4 05:28:22 np0005545273 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Dec  4 05:28:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:23 np0005545273 podman[181343]: 2025-12-04 10:28:23.129466777 +0000 UTC m=+0.091832764 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  4 05:28:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:25 np0005545273 podman[181406]: 2025-12-04 10:28:25.092792776 +0000 UTC m=+0.074309176 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 05:28:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:28:26
Dec  4 05:28:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:28:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:28:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.data']
Dec  4 05:28:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:28:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:28:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:28:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:28:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:28:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:28:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:28:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.592071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109592262, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3573302, "memory_usage": 3627680, "flush_reason": "Manual Compaction"}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109622089, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3486142, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9754, "largest_seqno": 11793, "table_properties": {"data_size": 3476863, "index_size": 5901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17795, "raw_average_key_size": 19, "raw_value_size": 3458498, "raw_average_value_size": 3779, "num_data_blocks": 267, "num_entries": 915, "num_filter_entries": 915, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843875, "oldest_key_time": 1764843875, "file_creation_time": 1764844109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 30126 microseconds, and 10417 cpu microseconds.
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.622234) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3486142 bytes OK
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.622283) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.624024) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.624053) EVENT_LOG_v1 {"time_micros": 1764844109624045, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.624087) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3564795, prev total WAL file size 3564795, number of live WAL files 2.
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.625848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3404KB)], [26(6084KB)]
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109625955, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9716794, "oldest_snapshot_seqno": -1}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3747 keys, 8056179 bytes, temperature: kUnknown
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109677396, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8056179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8027637, "index_size": 18064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90062, "raw_average_key_size": 24, "raw_value_size": 7956529, "raw_average_value_size": 2123, "num_data_blocks": 782, "num_entries": 3747, "num_filter_entries": 3747, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.677691) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8056179 bytes
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.679324) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.6 rd, 156.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4261, records dropped: 514 output_compression: NoCompression
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.679343) EVENT_LOG_v1 {"time_micros": 1764844109679332, "job": 10, "event": "compaction_finished", "compaction_time_micros": 51524, "compaction_time_cpu_micros": 17495, "output_level": 6, "num_output_files": 1, "total_output_size": 8056179, "num_input_records": 4261, "num_output_records": 3747, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109679983, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844109680896, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.625683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:28:29 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:28:29.680957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:28:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:32 np0005545273 systemd[1]: Stopping OpenSSH server daemon...
Dec  4 05:28:32 np0005545273 systemd[1]: sshd.service: Deactivated successfully.
Dec  4 05:28:32 np0005545273 systemd[1]: Stopped OpenSSH server daemon.
Dec  4 05:28:32 np0005545273 systemd[1]: sshd.service: Consumed 9.385s CPU time, read 32.0K from disk, written 224.0K to disk.
Dec  4 05:28:32 np0005545273 systemd[1]: Stopped target sshd-keygen.target.
Dec  4 05:28:32 np0005545273 systemd[1]: Stopping sshd-keygen.target...
Dec  4 05:28:32 np0005545273 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 05:28:32 np0005545273 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 05:28:32 np0005545273 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 05:28:32 np0005545273 systemd[1]: Reached target sshd-keygen.target.
Dec  4 05:28:32 np0005545273 systemd[1]: Starting OpenSSH server daemon...
Dec  4 05:28:32 np0005545273 systemd[1]: Started OpenSSH server daemon.
Dec  4 05:28:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:34 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:28:34 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:28:34 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:34 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:34 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:34 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 05:28:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:28:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:28:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:38 np0005545273 python3.9[186957]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:28:38 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:39 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:39 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:40 np0005545273 python3.9[188270]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:28:40 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:40 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:40 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:41 np0005545273 python3.9[189445]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:28:41 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:41 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:41 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:42 np0005545273 python3.9[190768]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:28:42 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:42 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:42 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:43 np0005545273 python3.9[191696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:43 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:28:43 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:28:43 np0005545273 systemd[1]: man-db-cache-update.service: Consumed 11.440s CPU time.
Dec  4 05:28:43 np0005545273 systemd[1]: run-r7e894d4c01a54f8786903b4e3ac50d4e.service: Deactivated successfully.
Dec  4 05:28:43 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:43 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:43 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:44 np0005545273 python3.9[191975]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:44 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:45 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:45 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:46 np0005545273 python3.9[192165]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:46 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:46 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:46 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:47 np0005545273 python3.9[192355]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:48 np0005545273 python3.9[192510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:48 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:49 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:49 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:50 np0005545273 python3.9[192700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 05:28:50 np0005545273 systemd[1]: Reloading.
Dec  4 05:28:50 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:28:50 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:28:50 np0005545273 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  4 05:28:50 np0005545273 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  4 05:28:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:51 np0005545273 python3.9[192893]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:52 np0005545273 python3.9[193048]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:52 np0005545273 python3.9[193203]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:53 np0005545273 podman[193330]: 2025-12-04 10:28:53.613139009 +0000 UTC m=+0.109814075 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:28:53 np0005545273 python3.9[193359]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:54 np0005545273 python3.9[193540]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:28:54.893 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:28:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:28:54.894 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:28:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:28:54.894 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:28:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:55 np0005545273 python3.9[193695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:55 np0005545273 podman[193697]: 2025-12-04 10:28:55.380792725 +0000 UTC m=+0.048462538 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  4 05:28:56 np0005545273 python3.9[193869]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:56 np0005545273 python3.9[194024]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:57 np0005545273 python3.9[194179]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:28:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:28:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:28:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:28:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:28:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:28:58 np0005545273 python3.9[194334]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:28:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:28:59 np0005545273 python3.9[194489]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:28:59 np0005545273 python3.9[194644]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:29:00 np0005545273 python3.9[194799]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:29:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:01 np0005545273 python3.9[194954]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 05:29:02 np0005545273 python3.9[195109]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:29:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:03 np0005545273 python3.9[195261]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:29:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:03 np0005545273 python3.9[195413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:29:04 np0005545273 python3.9[195565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:29:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:05 np0005545273 python3.9[195717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:29:05 np0005545273 python3.9[195871]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:29:06 np0005545273 python3.9[196023]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:29:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:29:07 np0005545273 python3.9[196271]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844145.9168942-554-269859166839638/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:07 np0005545273 podman[196313]: 2025-12-04 10:29:07.492509232 +0000 UTC m=+0.042557315 container create c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:29:07 np0005545273 systemd[1]: Started libpod-conmon-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope.
Dec  4 05:29:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:29:07 np0005545273 podman[196313]: 2025-12-04 10:29:07.472718215 +0000 UTC m=+0.022766328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:29:07 np0005545273 podman[196313]: 2025-12-04 10:29:07.5783949 +0000 UTC m=+0.128443003 container init c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:29:07 np0005545273 podman[196313]: 2025-12-04 10:29:07.587115298 +0000 UTC m=+0.137163381 container start c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:29:07 np0005545273 podman[196313]: 2025-12-04 10:29:07.590744394 +0000 UTC m=+0.140792497 container attach c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:29:07 np0005545273 exciting_hamilton[196356]: 167 167
Dec  4 05:29:07 np0005545273 systemd[1]: libpod-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope: Deactivated successfully.
Dec  4 05:29:07 np0005545273 conmon[196356]: conmon c7b9e1aa1a67400a3154 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope/container/memory.events
Dec  4 05:29:07 np0005545273 podman[196313]: 2025-12-04 10:29:07.60668299 +0000 UTC m=+0.156731103 container died c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:29:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5b5b0195bbc257969c325dfdb0893f64615da488a4a79de8028075c9e08c7481-merged.mount: Deactivated successfully.
Dec  4 05:29:07 np0005545273 podman[196313]: 2025-12-04 10:29:07.652441828 +0000 UTC m=+0.202489911 container remove c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:29:07 np0005545273 systemd[1]: libpod-conmon-c7b9e1aa1a67400a3154f73240cde1488f430e4e98a3a647e76f95f1a69fecda.scope: Deactivated successfully.
Dec  4 05:29:07 np0005545273 podman[196455]: 2025-12-04 10:29:07.819373337 +0000 UTC m=+0.045308186 container create 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:29:07 np0005545273 systemd[1]: Started libpod-conmon-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope.
Dec  4 05:29:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:29:07 np0005545273 podman[196455]: 2025-12-04 10:29:07.799882167 +0000 UTC m=+0.025817016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:29:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:07 np0005545273 podman[196455]: 2025-12-04 10:29:07.924080118 +0000 UTC m=+0.150014987 container init 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:29:07 np0005545273 podman[196455]: 2025-12-04 10:29:07.933179167 +0000 UTC m=+0.159114006 container start 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:29:07 np0005545273 podman[196455]: 2025-12-04 10:29:07.937626693 +0000 UTC m=+0.163561542 container attach 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:29:08 np0005545273 python3.9[196498]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:29:08 np0005545273 relaxed_diffie[196501]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:29:08 np0005545273 relaxed_diffie[196501]: --> All data devices are unavailable
Dec  4 05:29:08 np0005545273 systemd[1]: libpod-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope: Deactivated successfully.
Dec  4 05:29:08 np0005545273 conmon[196501]: conmon 27afa0e007b2d4a3c596 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope/container/memory.events
Dec  4 05:29:08 np0005545273 podman[196455]: 2025-12-04 10:29:08.48118987 +0000 UTC m=+0.707124709 container died 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:29:08 np0005545273 systemd[1]: var-lib-containers-storage-overlay-37f292e041eca68a4a35e85417f3adccad7cc0755ba38e39f19d69b25b4d94f6-merged.mount: Deactivated successfully.
Dec  4 05:29:08 np0005545273 podman[196455]: 2025-12-04 10:29:08.536388665 +0000 UTC m=+0.762323504 container remove 27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec  4 05:29:08 np0005545273 systemd[1]: libpod-conmon-27afa0e007b2d4a3c596d73b0149e1599a7e65a56a97fdc6fecd1b7f05476902.scope: Deactivated successfully.
Dec  4 05:29:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:08 np0005545273 python3.9[196642]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844147.51163-554-236402673375619/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:08 np0005545273 podman[196843]: 2025-12-04 10:29:08.997812273 +0000 UTC m=+0.039725801 container create 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:29:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:09 np0005545273 systemd[1]: Started libpod-conmon-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope.
Dec  4 05:29:09 np0005545273 podman[196843]: 2025-12-04 10:29:08.980982962 +0000 UTC m=+0.022896510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:29:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:29:09 np0005545273 podman[196843]: 2025-12-04 10:29:09.119938229 +0000 UTC m=+0.161851807 container init 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:29:09 np0005545273 podman[196843]: 2025-12-04 10:29:09.12761489 +0000 UTC m=+0.169528428 container start 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:29:09 np0005545273 podman[196843]: 2025-12-04 10:29:09.131118202 +0000 UTC m=+0.173031800 container attach 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 05:29:09 np0005545273 priceless_black[196885]: 167 167
Dec  4 05:29:09 np0005545273 systemd[1]: libpod-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope: Deactivated successfully.
Dec  4 05:29:09 np0005545273 conmon[196885]: conmon 4089aeca5d6274d094c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope/container/memory.events
Dec  4 05:29:09 np0005545273 podman[196843]: 2025-12-04 10:29:09.135063735 +0000 UTC m=+0.176977283 container died 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:29:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-28228f99a6a9af619cb1a859b8820e6f6403065e69b4e172e0e145c807923a72-merged.mount: Deactivated successfully.
Dec  4 05:29:09 np0005545273 podman[196843]: 2025-12-04 10:29:09.185201598 +0000 UTC m=+0.227115156 container remove 4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:29:09 np0005545273 systemd[1]: libpod-conmon-4089aeca5d6274d094c164d739b6d1882585e8d54dd1c786e31493807534b748.scope: Deactivated successfully.
Dec  4 05:29:09 np0005545273 python3.9[196890]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:09 np0005545273 podman[196912]: 2025-12-04 10:29:09.367775246 +0000 UTC m=+0.044297551 container create 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:29:09 np0005545273 systemd[1]: Started libpod-conmon-2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664.scope.
Dec  4 05:29:09 np0005545273 podman[196912]: 2025-12-04 10:29:09.346014287 +0000 UTC m=+0.022536612 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:29:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:29:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:09 np0005545273 podman[196912]: 2025-12-04 10:29:09.464409775 +0000 UTC m=+0.140932110 container init 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:29:09 np0005545273 podman[196912]: 2025-12-04 10:29:09.473141554 +0000 UTC m=+0.149663869 container start 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:29:09 np0005545273 podman[196912]: 2025-12-04 10:29:09.476299947 +0000 UTC m=+0.152822252 container attach 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:29:09 np0005545273 competent_knuth[196930]: {
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:    "0": [
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:        {
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "devices": [
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "/dev/loop3"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            ],
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_name": "ceph_lv0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_size": "21470642176",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "name": "ceph_lv0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "tags": {
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cluster_name": "ceph",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.crush_device_class": "",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.encrypted": "0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.objectstore": "bluestore",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osd_id": "0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.type": "block",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.vdo": "0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.with_tpm": "0"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            },
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "type": "block",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "vg_name": "ceph_vg0"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:        }
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:    ],
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:    "1": [
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:        {
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "devices": [
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "/dev/loop4"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            ],
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_name": "ceph_lv1",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_size": "21470642176",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "name": "ceph_lv1",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "tags": {
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cluster_name": "ceph",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.crush_device_class": "",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.encrypted": "0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.objectstore": "bluestore",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osd_id": "1",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.type": "block",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.vdo": "0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.with_tpm": "0"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            },
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "type": "block",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "vg_name": "ceph_vg1"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:        }
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:    ],
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:    "2": [
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:        {
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "devices": [
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "/dev/loop5"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            ],
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_name": "ceph_lv2",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_size": "21470642176",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "name": "ceph_lv2",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "tags": {
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.cluster_name": "ceph",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.crush_device_class": "",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.encrypted": "0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.objectstore": "bluestore",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osd_id": "2",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.type": "block",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.vdo": "0",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:                "ceph.with_tpm": "0"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            },
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "type": "block",
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:            "vg_name": "ceph_vg2"
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:        }
Dec  4 05:29:09 np0005545273 competent_knuth[196930]:    ]
Dec  4 05:29:09 np0005545273 competent_knuth[196930]: }
Dec  4 05:29:09 np0005545273 systemd[1]: libpod-2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664.scope: Deactivated successfully.
Dec  4 05:29:09 np0005545273 podman[196912]: 2025-12-04 10:29:09.804724563 +0000 UTC m=+0.481246888 container died 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:29:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0b65fe78db2d8c467b5fa387a14a143026616f6879f1b069f9ced08f203cbc2a-merged.mount: Deactivated successfully.
Dec  4 05:29:09 np0005545273 podman[196912]: 2025-12-04 10:29:09.859170478 +0000 UTC m=+0.535692783 container remove 2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_knuth, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:29:09 np0005545273 systemd[1]: libpod-conmon-2010dbf25281842c3153b748509810720b11225ede3447a9f335a2760602c664.scope: Deactivated successfully.
Dec  4 05:29:09 np0005545273 python3.9[197061]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844148.7507305-554-20831354211948/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:10 np0005545273 podman[197237]: 2025-12-04 10:29:10.324420425 +0000 UTC m=+0.039623517 container create 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:29:10 np0005545273 systemd[1]: Started libpod-conmon-154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be.scope.
Dec  4 05:29:10 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:29:10 np0005545273 podman[197237]: 2025-12-04 10:29:10.307915344 +0000 UTC m=+0.023118456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:29:10 np0005545273 podman[197237]: 2025-12-04 10:29:10.407301146 +0000 UTC m=+0.122504258 container init 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:29:10 np0005545273 podman[197237]: 2025-12-04 10:29:10.413929678 +0000 UTC m=+0.129132770 container start 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:29:10 np0005545273 podman[197237]: 2025-12-04 10:29:10.417450971 +0000 UTC m=+0.132654083 container attach 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:29:10 np0005545273 great_visvesvaraya[197287]: 167 167
Dec  4 05:29:10 np0005545273 systemd[1]: libpod-154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be.scope: Deactivated successfully.
Dec  4 05:29:10 np0005545273 podman[197237]: 2025-12-04 10:29:10.420846869 +0000 UTC m=+0.136049961 container died 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:29:10 np0005545273 systemd[1]: var-lib-containers-storage-overlay-23cbbbcfaca470dcd37bb163d1de646685a3025f93e810f596b38f2cf5dcf23c-merged.mount: Deactivated successfully.
Dec  4 05:29:10 np0005545273 podman[197237]: 2025-12-04 10:29:10.458759052 +0000 UTC m=+0.173962144 container remove 154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  4 05:29:10 np0005545273 systemd[1]: libpod-conmon-154b04df6ed7b5e6393afc144bb3bc0039b74861b148141f35a81d53413926be.scope: Deactivated successfully.
Dec  4 05:29:10 np0005545273 python3.9[197308]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:10 np0005545273 podman[197330]: 2025-12-04 10:29:10.633835004 +0000 UTC m=+0.050894143 container create 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:29:10 np0005545273 systemd[1]: Started libpod-conmon-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope.
Dec  4 05:29:10 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:29:10 np0005545273 podman[197330]: 2025-12-04 10:29:10.613578794 +0000 UTC m=+0.030637953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:29:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:10 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:29:10 np0005545273 podman[197330]: 2025-12-04 10:29:10.721695294 +0000 UTC m=+0.138754453 container init 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  4 05:29:10 np0005545273 podman[197330]: 2025-12-04 10:29:10.732036784 +0000 UTC m=+0.149095923 container start 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:29:10 np0005545273 podman[197330]: 2025-12-04 10:29:10.741841621 +0000 UTC m=+0.158900760 container attach 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:29:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:11 np0005545273 python3.9[197485]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844150.1244535-554-105992836077490/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:11 np0005545273 lvm[197631]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:29:11 np0005545273 lvm[197630]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:29:11 np0005545273 lvm[197630]: VG ceph_vg1 finished
Dec  4 05:29:11 np0005545273 lvm[197631]: VG ceph_vg0 finished
Dec  4 05:29:11 np0005545273 lvm[197645]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:29:11 np0005545273 lvm[197645]: VG ceph_vg2 finished
Dec  4 05:29:11 np0005545273 wizardly_shamir[197352]: {}
Dec  4 05:29:11 np0005545273 systemd[1]: libpod-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope: Deactivated successfully.
Dec  4 05:29:11 np0005545273 podman[197330]: 2025-12-04 10:29:11.599170272 +0000 UTC m=+1.016229441 container died 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:29:11 np0005545273 systemd[1]: libpod-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope: Consumed 1.379s CPU time.
Dec  4 05:29:11 np0005545273 systemd[1]: var-lib-containers-storage-overlay-889ee54a9e384495cc173935a06dcf8b7255b347757bd5760cf623ba2ab63815-merged.mount: Deactivated successfully.
Dec  4 05:29:11 np0005545273 podman[197330]: 2025-12-04 10:29:11.676869325 +0000 UTC m=+1.093928474 container remove 36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:29:11 np0005545273 systemd[1]: libpod-conmon-36f5d3595f3d31d243cfc51838a42f709fb4aa0fc987d73b509bcf868bcda9b8.scope: Deactivated successfully.
Dec  4 05:29:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:29:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:29:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:29:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:29:11 np0005545273 python3.9[197712]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:29:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:29:12 np0005545273 python3.9[197870]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844151.313199-554-213992733322014/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:13 np0005545273 python3.9[198022]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:13 np0005545273 python3.9[198147]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844152.5878441-554-279224405908328/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:14 np0005545273 python3.9[198299]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:14 np0005545273 python3.9[198422]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844153.759381-554-99471758427157/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:15 np0005545273 python3.9[198574]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:15 np0005545273 python3.9[198699]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764844154.8776581-554-259186467013238/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:16 np0005545273 auditd[705]: Audit daemon rotating log files
Dec  4 05:29:16 np0005545273 python3.9[198851]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  4 05:29:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:17 np0005545273 python3.9[199004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:18 np0005545273 python3.9[199156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:18 np0005545273 python3.9[199308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:19 np0005545273 python3.9[199460]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:19 np0005545273 python3.9[199612]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:20 np0005545273 python3.9[199764]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:21 np0005545273 python3.9[199916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:22 np0005545273 python3.9[200068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:22 np0005545273 python3.9[200220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:23 np0005545273 python3.9[200372]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:23 np0005545273 podman[200496]: 2025-12-04 10:29:23.902203057 +0000 UTC m=+0.141957549 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  4 05:29:23 np0005545273 python3.9[200541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:24 np0005545273 python3.9[200702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:25 np0005545273 python3.9[200854]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:25 np0005545273 podman[200978]: 2025-12-04 10:29:25.666606033 +0000 UTC m=+0.061953289 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:29:25 np0005545273 python3.9[201025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:26 np0005545273 python3.9[201177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:29:26
Dec  4 05:29:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:29:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:29:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root']
Dec  4 05:29:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:27 np0005545273 python3.9[201302]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844166.0674963-775-224813271706822/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:27 np0005545273 python3.9[201454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:29:27 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:29:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:29:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:29:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:29:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:29:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:29:28 np0005545273 python3.9[201577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844167.310295-775-98094292437953/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:28 np0005545273 python3.9[201731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:29 np0005545273 python3.9[201856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844168.4497063-775-72542875163687/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:29 np0005545273 python3.9[202008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:30 np0005545273 python3.9[202131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844169.5283613-775-135504904442289/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:31 np0005545273 python3.9[202285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:31 np0005545273 python3.9[202408]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844170.5996962-775-25796917775519/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:32 np0005545273 python3.9[202560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:33 np0005545273 python3.9[202683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844171.9442172-775-63282791572148/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:33 np0005545273 python3.9[202835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:34 np0005545273 python3.9[202958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844173.2911599-775-52485824383788/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:34 np0005545273 python3.9[203110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:35 np0005545273 python3.9[203233]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844174.5129876-775-40357442603023/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:36 np0005545273 python3.9[203385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:36 np0005545273 python3.9[203508]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844175.611288-775-120937440869745/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:29:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:29:37 np0005545273 python3.9[203660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:37 np0005545273 python3.9[203783]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844176.781864-775-213560848679011/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:38 np0005545273 python3.9[203935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:38 np0005545273 python3.9[204058]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844177.9276597-775-186947216705732/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:39 np0005545273 python3.9[204210]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:39 np0005545273 python3.9[204333]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844179.0446048-775-280373490400858/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:40 np0005545273 python3.9[204485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:41 np0005545273 python3.9[204608]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844180.0977564-775-102688361263797/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:41 np0005545273 python3.9[204760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:29:42 np0005545273 python3.9[204883]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844181.2038312-775-235087340015284/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:42 np0005545273 python3.9[205033]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:29:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:43 np0005545273 python3.9[205188]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  4 05:29:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:46 np0005545273 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  4 05:29:46 np0005545273 python3.9[205344]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:46 np0005545273 python3.9[205496]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:47 np0005545273 python3.9[205648]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:48 np0005545273 python3.9[205800]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:48 np0005545273 python3.9[205952]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:49 np0005545273 python3.9[206104]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:50 np0005545273 python3.9[206256]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:50 np0005545273 python3.9[206408]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:51 np0005545273 python3.9[206560]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:51 np0005545273 python3.9[206712]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:52 np0005545273 python3.9[206864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:29:52 np0005545273 systemd[1]: Reloading.
Dec  4 05:29:52 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:29:52 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:29:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:53 np0005545273 systemd[1]: Starting libvirt logging daemon socket...
Dec  4 05:29:53 np0005545273 systemd[1]: Listening on libvirt logging daemon socket.
Dec  4 05:29:53 np0005545273 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  4 05:29:53 np0005545273 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  4 05:29:53 np0005545273 systemd[1]: Starting libvirt logging daemon...
Dec  4 05:29:53 np0005545273 systemd[1]: Started libvirt logging daemon.
Dec  4 05:29:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:53 np0005545273 python3.9[207056]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:29:54 np0005545273 systemd[1]: Reloading.
Dec  4 05:29:54 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:29:54 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:29:54 np0005545273 podman[207058]: 2025-12-04 10:29:54.150530574 +0000 UTC m=+0.130692282 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  4 05:29:54 np0005545273 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  4 05:29:54 np0005545273 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  4 05:29:54 np0005545273 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  4 05:29:54 np0005545273 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  4 05:29:54 np0005545273 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  4 05:29:54 np0005545273 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  4 05:29:54 np0005545273 systemd[1]: Starting libvirt nodedev daemon...
Dec  4 05:29:54 np0005545273 systemd[1]: Started libvirt nodedev daemon.
Dec  4 05:29:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:29:54.895 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:29:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:29:54.897 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:29:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:29:54.897 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:29:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:55 np0005545273 python3.9[207297]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:29:55 np0005545273 systemd[1]: Reloading.
Dec  4 05:29:55 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:29:55 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:29:55 np0005545273 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  4 05:29:55 np0005545273 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  4 05:29:55 np0005545273 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  4 05:29:55 np0005545273 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  4 05:29:55 np0005545273 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  4 05:29:55 np0005545273 systemd[1]: Starting libvirt proxy daemon...
Dec  4 05:29:55 np0005545273 systemd[1]: Started libvirt proxy daemon.
Dec  4 05:29:55 np0005545273 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  4 05:29:55 np0005545273 podman[207434]: 2025-12-04 10:29:55.917035813 +0000 UTC m=+0.056478559 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec  4 05:29:55 np0005545273 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  4 05:29:56 np0005545273 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  4 05:29:56 np0005545273 python3.9[207536]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:29:56 np0005545273 systemd[1]: Reloading.
Dec  4 05:29:56 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:29:56 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:29:56 np0005545273 systemd[1]: Listening on libvirt locking daemon socket.
Dec  4 05:29:56 np0005545273 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  4 05:29:56 np0005545273 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  4 05:29:56 np0005545273 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  4 05:29:56 np0005545273 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  4 05:29:56 np0005545273 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  4 05:29:56 np0005545273 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  4 05:29:56 np0005545273 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  4 05:29:56 np0005545273 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  4 05:29:56 np0005545273 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  4 05:29:56 np0005545273 systemd[1]: Starting libvirt QEMU daemon...
Dec  4 05:29:56 np0005545273 systemd[1]: Started libvirt QEMU daemon.
Dec  4 05:29:56 np0005545273 setroubleshoot[207334]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 849d845a-6d91-417d-93c6-3983faec16d6
Dec  4 05:29:56 np0005545273 setroubleshoot[207334]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  4 05:29:56 np0005545273 setroubleshoot[207334]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 849d845a-6d91-417d-93c6-3983faec16d6
Dec  4 05:29:56 np0005545273 setroubleshoot[207334]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  4 05:29:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:57 np0005545273 python3.9[207754]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:29:57 np0005545273 systemd[1]: Reloading.
Dec  4 05:29:57 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:29:57 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:29:57 np0005545273 systemd[1]: Starting libvirt secret daemon socket...
Dec  4 05:29:57 np0005545273 systemd[1]: Listening on libvirt secret daemon socket.
Dec  4 05:29:57 np0005545273 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  4 05:29:57 np0005545273 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  4 05:29:57 np0005545273 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  4 05:29:57 np0005545273 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  4 05:29:57 np0005545273 systemd[1]: Starting libvirt secret daemon...
Dec  4 05:29:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:29:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:29:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:29:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:29:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:29:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:29:57 np0005545273 systemd[1]: Started libvirt secret daemon.
Dec  4 05:29:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:29:58 np0005545273 python3.9[207965]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:29:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:29:59 np0005545273 python3.9[208117]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 05:30:00 np0005545273 python3.9[208269]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:00 np0005545273 python3.9[208423]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 05:30:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:01 np0005545273 python3.9[208573]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:02 np0005545273 python3.9[208694]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844201.139754-1133-66970071124744/.source.xml follow=False _original_basename=secret.xml.j2 checksum=48aecb49cd31a3c01b7ae17e3d1019c6e6eee501 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:02 np0005545273 python3.9[208846]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:03 np0005545273 python3.9[209008]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:05 np0005545273 python3.9[209471]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:06 np0005545273 python3.9[209623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:07 np0005545273 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  4 05:30:07 np0005545273 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.000s CPU time.
Dec  4 05:30:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:07 np0005545273 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  4 05:30:07 np0005545273 python3.9[209746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844206.0544598-1188-139999483372688/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:07 np0005545273 python3.9[209899]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:08 np0005545273 python3.9[210051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:09 np0005545273 python3.9[210129]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:09 np0005545273 python3.9[210281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:10 np0005545273 python3.9[210359]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.cci2evha recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:10 np0005545273 python3.9[210511]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:11 np0005545273 python3.9[210589]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:11 np0005545273 python3.9[210741]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:30:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:30:12 np0005545273 python3[210964]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 05:30:12 np0005545273 podman[211115]: 2025-12-04 10:30:12.975735374 +0000 UTC m=+0.041879308 container create f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:30:13 np0005545273 systemd[1]: Started libpod-conmon-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope.
Dec  4 05:30:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:30:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:13 np0005545273 podman[211115]: 2025-12-04 10:30:12.960386323 +0000 UTC m=+0.026530267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:30:13 np0005545273 podman[211115]: 2025-12-04 10:30:13.06078072 +0000 UTC m=+0.126924684 container init f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:30:13 np0005545273 podman[211115]: 2025-12-04 10:30:13.068549989 +0000 UTC m=+0.134693913 container start f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:30:13 np0005545273 podman[211115]: 2025-12-04 10:30:13.073416853 +0000 UTC m=+0.139560807 container attach f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:30:13 np0005545273 systemd[1]: libpod-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope: Deactivated successfully.
Dec  4 05:30:13 np0005545273 quizzical_davinci[211154]: 167 167
Dec  4 05:30:13 np0005545273 conmon[211154]: conmon f78af6297ca86d097076 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope/container/memory.events
Dec  4 05:30:13 np0005545273 podman[211115]: 2025-12-04 10:30:13.075528457 +0000 UTC m=+0.141672381 container died f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:30:13 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2fa842cd248bdd189fa795eefbb82e6719414af03faf0c64ebcb3cee9f0e2226-merged.mount: Deactivated successfully.
Dec  4 05:30:13 np0005545273 podman[211115]: 2025-12-04 10:30:13.114835708 +0000 UTC m=+0.180979642 container remove f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  4 05:30:13 np0005545273 systemd[1]: libpod-conmon-f78af6297ca86d097076a3d63d5c38beba5e601292f19faecfa0a16556428894.scope: Deactivated successfully.
Dec  4 05:30:13 np0005545273 podman[211228]: 2025-12-04 10:30:13.269776486 +0000 UTC m=+0.042742920 container create 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:30:13 np0005545273 systemd[1]: Started libpod-conmon-6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4.scope.
Dec  4 05:30:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:30:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:13 np0005545273 podman[211228]: 2025-12-04 10:30:13.251281685 +0000 UTC m=+0.024248119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:30:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:13 np0005545273 podman[211228]: 2025-12-04 10:30:13.372415481 +0000 UTC m=+0.145381935 container init 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:30:13 np0005545273 podman[211228]: 2025-12-04 10:30:13.380122748 +0000 UTC m=+0.153089182 container start 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:30:13 np0005545273 podman[211228]: 2025-12-04 10:30:13.383833962 +0000 UTC m=+0.156800426 container attach 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:30:13 np0005545273 python3.9[211236]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:13 np0005545273 vibrant_sammet[211245]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:30:13 np0005545273 vibrant_sammet[211245]: --> All data devices are unavailable
Dec  4 05:30:13 np0005545273 python3.9[211333]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:13 np0005545273 systemd[1]: libpod-6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4.scope: Deactivated successfully.
Dec  4 05:30:13 np0005545273 podman[211343]: 2025-12-04 10:30:13.889845406 +0000 UTC m=+0.023247874 container died 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:30:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f6f63a4abc67a91cc007f69e357ef1f40b827c179693cad259fd0d3dc47fb687-merged.mount: Deactivated successfully.
Dec  4 05:30:14 np0005545273 podman[211343]: 2025-12-04 10:30:14.039125339 +0000 UTC m=+0.172527797 container remove 6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:30:14 np0005545273 systemd[1]: libpod-conmon-6ef81efebbc53cbb189be3f5db2c0ebf9e6fb065a3f3bf39fdf3cf056be3c9c4.scope: Deactivated successfully.
Dec  4 05:30:14 np0005545273 podman[211573]: 2025-12-04 10:30:14.467816421 +0000 UTC m=+0.045613152 container create 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:30:14 np0005545273 systemd[1]: Started libpod-conmon-4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3.scope.
Dec  4 05:30:14 np0005545273 python3.9[211559]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:14 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:30:14 np0005545273 podman[211573]: 2025-12-04 10:30:14.449151096 +0000 UTC m=+0.026947867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:30:14 np0005545273 podman[211573]: 2025-12-04 10:30:14.550308153 +0000 UTC m=+0.128104904 container init 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:30:14 np0005545273 podman[211573]: 2025-12-04 10:30:14.558679507 +0000 UTC m=+0.136476248 container start 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:30:14 np0005545273 podman[211573]: 2025-12-04 10:30:14.562156326 +0000 UTC m=+0.139953057 container attach 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:30:14 np0005545273 xenodochial_clarke[211590]: 167 167
Dec  4 05:30:14 np0005545273 systemd[1]: libpod-4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3.scope: Deactivated successfully.
Dec  4 05:30:14 np0005545273 podman[211573]: 2025-12-04 10:30:14.564378052 +0000 UTC m=+0.142174793 container died 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:30:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b2e22aea59047f045c245f7b545d8e9a0fbb5ca07d13de35958dbe3932b6c903-merged.mount: Deactivated successfully.
Dec  4 05:30:14 np0005545273 podman[211573]: 2025-12-04 10:30:14.599743394 +0000 UTC m=+0.177540135 container remove 4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_clarke, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  4 05:30:14 np0005545273 systemd[1]: libpod-conmon-4f3872aceee8a827c2c490c7cf1b24fd90bdeb303fa6a9198c93bb36f77232d3.scope: Deactivated successfully.
Dec  4 05:30:14 np0005545273 podman[211663]: 2025-12-04 10:30:14.754580958 +0000 UTC m=+0.044413632 container create 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:30:14 np0005545273 systemd[1]: Started libpod-conmon-0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9.scope.
Dec  4 05:30:14 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:30:14 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:14 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:14 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:14 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:14 np0005545273 podman[211663]: 2025-12-04 10:30:14.73504398 +0000 UTC m=+0.024876684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:30:14 np0005545273 podman[211663]: 2025-12-04 10:30:14.84059706 +0000 UTC m=+0.130429764 container init 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:30:14 np0005545273 podman[211663]: 2025-12-04 10:30:14.846628754 +0000 UTC m=+0.136461438 container start 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:30:14 np0005545273 podman[211663]: 2025-12-04 10:30:14.849795394 +0000 UTC m=+0.139628078 container attach 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:30:15 np0005545273 python3.9[211708]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]: {
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:    "0": [
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:        {
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "devices": [
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "/dev/loop3"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            ],
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_name": "ceph_lv0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_size": "21470642176",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "name": "ceph_lv0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "tags": {
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cluster_name": "ceph",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.crush_device_class": "",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.encrypted": "0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.objectstore": "bluestore",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osd_id": "0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.type": "block",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.vdo": "0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.with_tpm": "0"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            },
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "type": "block",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "vg_name": "ceph_vg0"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:        }
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:    ],
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:    "1": [
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:        {
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "devices": [
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "/dev/loop4"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            ],
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_name": "ceph_lv1",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_size": "21470642176",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "name": "ceph_lv1",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "tags": {
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cluster_name": "ceph",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.crush_device_class": "",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.encrypted": "0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.objectstore": "bluestore",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osd_id": "1",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.type": "block",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.vdo": "0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.with_tpm": "0"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            },
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "type": "block",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "vg_name": "ceph_vg1"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:        }
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:    ],
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:    "2": [
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:        {
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "devices": [
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "/dev/loop5"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            ],
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_name": "ceph_lv2",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_size": "21470642176",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "name": "ceph_lv2",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "tags": {
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.cluster_name": "ceph",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.crush_device_class": "",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.encrypted": "0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.objectstore": "bluestore",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osd_id": "2",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.type": "block",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.vdo": "0",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:                "ceph.with_tpm": "0"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            },
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "type": "block",
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:            "vg_name": "ceph_vg2"
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:        }
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]:    ]
Dec  4 05:30:15 np0005545273 eloquent_curie[211709]: }
Dec  4 05:30:15 np0005545273 systemd[1]: libpod-0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9.scope: Deactivated successfully.
Dec  4 05:30:15 np0005545273 podman[211663]: 2025-12-04 10:30:15.149976973 +0000 UTC m=+0.439809697 container died 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:30:15 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c18f6ebd8a909f53c00b858541553fc4f0aca0f52234c4a6da92e9f68aef43d8-merged.mount: Deactivated successfully.
Dec  4 05:30:15 np0005545273 podman[211663]: 2025-12-04 10:30:15.196325744 +0000 UTC m=+0.486158418 container remove 0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_curie, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:30:15 np0005545273 systemd[1]: libpod-conmon-0064f5ffc22ad4b89a1ab0eb35fe911c2b63e0668316dada91c5ec9585310ca9.scope: Deactivated successfully.
Dec  4 05:30:15 np0005545273 python3.9[211932]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:15 np0005545273 podman[211945]: 2025-12-04 10:30:15.591700618 +0000 UTC m=+0.021931900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:30:16 np0005545273 python3.9[212036]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:16 np0005545273 podman[211945]: 2025-12-04 10:30:16.247633702 +0000 UTC m=+0.677864954 container create ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:30:16 np0005545273 systemd[1]: Started libpod-conmon-ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d.scope.
Dec  4 05:30:16 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:30:16 np0005545273 podman[211945]: 2025-12-04 10:30:16.341430301 +0000 UTC m=+0.771661593 container init ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:30:16 np0005545273 podman[211945]: 2025-12-04 10:30:16.34965915 +0000 UTC m=+0.779890412 container start ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:30:16 np0005545273 podman[211945]: 2025-12-04 10:30:16.353291853 +0000 UTC m=+0.783523115 container attach ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:30:16 np0005545273 trusting_dijkstra[212109]: 167 167
Dec  4 05:30:16 np0005545273 systemd[1]: libpod-ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d.scope: Deactivated successfully.
Dec  4 05:30:16 np0005545273 podman[211945]: 2025-12-04 10:30:16.356166496 +0000 UTC m=+0.786397758 container died ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:30:16 np0005545273 systemd[1]: var-lib-containers-storage-overlay-99a8ba46e8bda43443296e45247d7177d3cbde2f12352d1d704612089e361707-merged.mount: Deactivated successfully.
Dec  4 05:30:16 np0005545273 podman[211945]: 2025-12-04 10:30:16.393960729 +0000 UTC m=+0.824191991 container remove ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_dijkstra, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:30:16 np0005545273 systemd[1]: libpod-conmon-ae19e0dceed350ae59340dbabcbaedb9b503ad25316f60a1446779e30f8a2d3d.scope: Deactivated successfully.
Dec  4 05:30:16 np0005545273 podman[212186]: 2025-12-04 10:30:16.521448058 +0000 UTC m=+0.021380216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:30:16 np0005545273 podman[212186]: 2025-12-04 10:30:16.735774378 +0000 UTC m=+0.235706516 container create daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:30:16 np0005545273 systemd[1]: Started libpod-conmon-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope.
Dec  4 05:30:16 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:30:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:30:16 np0005545273 podman[212186]: 2025-12-04 10:30:16.841513673 +0000 UTC m=+0.341445831 container init daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:30:16 np0005545273 podman[212186]: 2025-12-04 10:30:16.849961328 +0000 UTC m=+0.349893466 container start daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:30:16 np0005545273 podman[212186]: 2025-12-04 10:30:16.85435321 +0000 UTC m=+0.354285368 container attach daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:30:16 np0005545273 python3.9[212230]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:17 np0005545273 python3.9[212326]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:17 np0005545273 lvm[212438]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:30:17 np0005545273 lvm[212437]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:30:17 np0005545273 lvm[212438]: VG ceph_vg1 finished
Dec  4 05:30:17 np0005545273 lvm[212437]: VG ceph_vg0 finished
Dec  4 05:30:17 np0005545273 lvm[212440]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:30:17 np0005545273 lvm[212440]: VG ceph_vg2 finished
Dec  4 05:30:17 np0005545273 lucid_antonelli[212234]: {}
Dec  4 05:30:17 np0005545273 systemd[1]: libpod-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope: Deactivated successfully.
Dec  4 05:30:17 np0005545273 systemd[1]: libpod-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope: Consumed 1.274s CPU time.
Dec  4 05:30:17 np0005545273 podman[212186]: 2025-12-04 10:30:17.62907943 +0000 UTC m=+1.129011568 container died daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:30:17 np0005545273 python3.9[212558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:18 np0005545273 systemd[1]: var-lib-containers-storage-overlay-14f5718bb39f846e984b9a902084754972f1c7629b7ccbc64f2e989321b722cd-merged.mount: Deactivated successfully.
Dec  4 05:30:18 np0005545273 podman[212186]: 2025-12-04 10:30:18.501009885 +0000 UTC m=+2.000942023 container remove daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_antonelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  4 05:30:18 np0005545273 systemd[1]: libpod-conmon-daf9d48cf2662a3b6e20d702ce08e47c7759f72255d67ff161d646535fdc5751.scope: Deactivated successfully.
Dec  4 05:30:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:30:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:30:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:30:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:30:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:18 np0005545273 python3.9[212683]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764844217.4834547-1313-228323929181990/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:19 np0005545273 python3.9[212861]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:19 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:30:19 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:30:19 np0005545273 python3.9[213013]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:20 np0005545273 python3.9[213168]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:21 np0005545273 python3.9[213320]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:21 np0005545273 python3.9[213473]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:30:22 np0005545273 python3.9[213627]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:23 np0005545273 python3.9[213782]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:23 np0005545273 python3.9[213934]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:24 np0005545273 python3.9[214057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844223.4558349-1385-188262302337939/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:24 np0005545273 podman[214181]: 2025-12-04 10:30:24.953134196 +0000 UTC m=+0.090422661 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  4 05:30:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:25 np0005545273 python3.9[214227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:25 np0005545273 python3.9[214358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844224.5838633-1400-191982531484134/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:26 np0005545273 podman[214482]: 2025-12-04 10:30:26.186250394 +0000 UTC m=+0.057793647 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:30:26 np0005545273 python3.9[214529]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:30:26
Dec  4 05:30:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:30:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:30:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Dec  4 05:30:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:30:26 np0005545273 python3.9[214652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844225.9014478-1415-146146697725042/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:27 np0005545273 python3.9[214804]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:30:27 np0005545273 systemd[1]: Reloading.
Dec  4 05:30:27 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:30:27 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:30:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:30:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:30:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:30:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:30:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:30:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:30:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:30:28 np0005545273 systemd[1]: Reached target edpm_libvirt.target.
Dec  4 05:30:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:28 np0005545273 python3.9[214995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  4 05:30:28 np0005545273 systemd[1]: Reloading.
Dec  4 05:30:29 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:30:29 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:30:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:29 np0005545273 systemd[1]: Reloading.
Dec  4 05:30:29 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:30:29 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:30:30 np0005545273 systemd[1]: session-49.scope: Deactivated successfully.
Dec  4 05:30:30 np0005545273 systemd[1]: session-49.scope: Consumed 3min 28.019s CPU time.
Dec  4 05:30:30 np0005545273 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Dec  4 05:30:30 np0005545273 systemd-logind[798]: Removed session 49.
Dec  4 05:30:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:35 np0005545273 systemd-logind[798]: New session 50 of user zuul.
Dec  4 05:30:35 np0005545273 systemd[1]: Started Session 50 of User zuul.
Dec  4 05:30:36 np0005545273 python3.9[215243]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:30:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:30:37 np0005545273 python3.9[215397]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:30:37 np0005545273 network[215416]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:30:37 np0005545273 network[215417]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:30:37 np0005545273 network[215418]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:30:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:42 np0005545273 python3.9[215691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 05:30:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:43 np0005545273 python3.9[215775]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:30:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:49 np0005545273 python3.9[215932]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:30:50 np0005545273 python3.9[216084]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:51 np0005545273 python3.9[216237]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:30:52 np0005545273 python3.9[216389]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:30:53 np0005545273 python3.9[216542]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:30:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:53 np0005545273 python3.9[216665]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844252.4737446-95-225401281664202/.source.iscsi _original_basename=.gpr_it4z follow=False checksum=a94c711a5f59472c43c3025afd5714c35f3718f9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:54 np0005545273 python3.9[216817]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:30:54.896 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:30:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:30:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:30:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:30:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:30:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:55 np0005545273 podman[216941]: 2025-12-04 10:30:55.287281757 +0000 UTC m=+0.098247092 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  4 05:30:55 np0005545273 python3.9[216987]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:30:56 np0005545273 podman[217147]: 2025-12-04 10:30:56.335787211 +0000 UTC m=+0.053201235 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:30:56 np0005545273 python3.9[217148]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:30:56 np0005545273 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  4 05:30:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:30:57 np0005545273 python3.9[217322]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:30:57 np0005545273 systemd[1]: Reloading.
Dec  4 05:30:57 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:30:57 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:30:57 np0005545273 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  4 05:30:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:30:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:30:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:30:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:30:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:30:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:30:57 np0005545273 systemd[1]: Starting Open-iSCSI...
Dec  4 05:30:57 np0005545273 kernel: Loading iSCSI transport class v2.0-870.
Dec  4 05:30:57 np0005545273 systemd[1]: Started Open-iSCSI.
Dec  4 05:30:57 np0005545273 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  4 05:30:57 np0005545273 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  4 05:30:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:30:58 np0005545273 python3.9[217522]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:30:58 np0005545273 network[217539]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:30:58 np0005545273 network[217540]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:30:58 np0005545273 network[217541]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:30:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:02 np0005545273 python3.9[217815]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  4 05:31:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:03 np0005545273 python3.9[217967]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  4 05:31:04 np0005545273 python3.9[218123]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:05 np0005545273 python3.9[218246]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844263.9778721-172-131977077771746/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:05 np0005545273 python3.9[218398]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:07 np0005545273 python3.9[218550]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:31:07 np0005545273 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  4 05:31:07 np0005545273 systemd[1]: Stopped Load Kernel Modules.
Dec  4 05:31:07 np0005545273 systemd[1]: Stopping Load Kernel Modules...
Dec  4 05:31:07 np0005545273 systemd[1]: Starting Load Kernel Modules...
Dec  4 05:31:07 np0005545273 systemd[1]: Finished Load Kernel Modules.
Dec  4 05:31:07 np0005545273 python3.9[218706]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:31:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:08 np0005545273 python3.9[218858]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:31:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:09 np0005545273 python3.9[219010]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:31:10 np0005545273 python3.9[219162]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:10 np0005545273 python3.9[219285]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844269.6544032-230-205082806825033/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:11 np0005545273 python3.9[219437]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:31:12 np0005545273 python3.9[219590]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:12 np0005545273 python3.9[219742]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:13 np0005545273 python3.9[219894]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:14 np0005545273 python3.9[220046]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:14 np0005545273 python3.9[220198]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:15 np0005545273 python3.9[220350]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:16 np0005545273 python3.9[220502]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:17 np0005545273 python3.9[220654]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:31:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:17 np0005545273 python3.9[220810]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:18 np0005545273 python3.9[220962]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:31:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:19 np0005545273 python3.9[221164]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:31:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:31:19 np0005545273 python3.9[221273]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:31:19 np0005545273 podman[221388]: 2025-12-04 10:31:19.721084153 +0000 UTC m=+0.048012179 container create ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:31:19 np0005545273 systemd[1]: Started libpod-conmon-ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f.scope.
Dec  4 05:31:19 np0005545273 podman[221388]: 2025-12-04 10:31:19.698929343 +0000 UTC m=+0.025857419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:31:19 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:19 np0005545273 podman[221388]: 2025-12-04 10:31:19.82243893 +0000 UTC m=+0.149366986 container init ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:31:19 np0005545273 podman[221388]: 2025-12-04 10:31:19.830629818 +0000 UTC m=+0.157557844 container start ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:31:19 np0005545273 goofy_tesla[221452]: 167 167
Dec  4 05:31:19 np0005545273 systemd[1]: libpod-ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f.scope: Deactivated successfully.
Dec  4 05:31:19 np0005545273 podman[221388]: 2025-12-04 10:31:19.966392212 +0000 UTC m=+0.293320328 container attach ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:31:19 np0005545273 podman[221388]: 2025-12-04 10:31:19.966969237 +0000 UTC m=+0.293897293 container died ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:31:20 np0005545273 python3.9[221519]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:20 np0005545273 python3.9[221598]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:31:20 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:31:20 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:31:20 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:31:20 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f459fb76c30b12303870fa4bc77d8bbc44a51ad15d967f8eec075e964ba98dd2-merged.mount: Deactivated successfully.
Dec  4 05:31:20 np0005545273 podman[221388]: 2025-12-04 10:31:20.555389106 +0000 UTC m=+0.882317142 container remove ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_tesla, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:31:20 np0005545273 systemd[1]: libpod-conmon-ed503a298f14f4c88c52f1dd6d21285082009ca20e87722fd6a8a93c39f3927f.scope: Deactivated successfully.
Dec  4 05:31:20 np0005545273 podman[221654]: 2025-12-04 10:31:20.745917752 +0000 UTC m=+0.050780837 container create c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:31:20 np0005545273 systemd[1]: Started libpod-conmon-c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748.scope.
Dec  4 05:31:20 np0005545273 podman[221654]: 2025-12-04 10:31:20.7252898 +0000 UTC m=+0.030152895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:31:20 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:20 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:20 np0005545273 podman[221654]: 2025-12-04 10:31:20.900196396 +0000 UTC m=+0.205059491 container init c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:31:20 np0005545273 podman[221654]: 2025-12-04 10:31:20.910943158 +0000 UTC m=+0.215806233 container start c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:31:20 np0005545273 podman[221654]: 2025-12-04 10:31:20.915918089 +0000 UTC m=+0.220781194 container attach c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:31:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:21 np0005545273 python3.9[221780]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:21 np0005545273 thirsty_lederberg[221723]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:31:21 np0005545273 thirsty_lederberg[221723]: --> All data devices are unavailable
Dec  4 05:31:21 np0005545273 systemd[1]: libpod-c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748.scope: Deactivated successfully.
Dec  4 05:31:21 np0005545273 podman[221654]: 2025-12-04 10:31:21.405255417 +0000 UTC m=+0.710118522 container died c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:31:21 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c3e937db96777b70226dab0ca9ff0661897aa2598ec4cc3dcd509dd1ea27668f-merged.mount: Deactivated successfully.
Dec  4 05:31:21 np0005545273 podman[221654]: 2025-12-04 10:31:21.569188706 +0000 UTC m=+0.874051781 container remove c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lederberg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:31:21 np0005545273 systemd[1]: libpod-conmon-c426bed4b5fa303096a2938d004afbc5e7c28f46bf1d0947afd5db1acb54b748.scope: Deactivated successfully.
Dec  4 05:31:21 np0005545273 python3.9[221961]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:22 np0005545273 podman[222049]: 2025-12-04 10:31:22.004660703 +0000 UTC m=+0.024991379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:31:22 np0005545273 podman[222049]: 2025-12-04 10:31:22.14750858 +0000 UTC m=+0.167839276 container create a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:31:22 np0005545273 python3.9[222115]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:22 np0005545273 systemd[1]: Started libpod-conmon-a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86.scope.
Dec  4 05:31:22 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:22 np0005545273 podman[222049]: 2025-12-04 10:31:22.330451162 +0000 UTC m=+0.350781828 container init a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:31:22 np0005545273 podman[222049]: 2025-12-04 10:31:22.33980365 +0000 UTC m=+0.360134316 container start a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  4 05:31:22 np0005545273 podman[222049]: 2025-12-04 10:31:22.344446502 +0000 UTC m=+0.364777168 container attach a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:31:22 np0005545273 loving_hellman[222118]: 167 167
Dec  4 05:31:22 np0005545273 systemd[1]: libpod-a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86.scope: Deactivated successfully.
Dec  4 05:31:22 np0005545273 podman[222049]: 2025-12-04 10:31:22.345449507 +0000 UTC m=+0.365780173 container died a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:31:22 np0005545273 systemd[1]: var-lib-containers-storage-overlay-704a924a311c7ef4cdf52b4584ecdb1c2acbfbc91161118ce05740ec510ce183-merged.mount: Deactivated successfully.
Dec  4 05:31:22 np0005545273 podman[222049]: 2025-12-04 10:31:22.458706222 +0000 UTC m=+0.479036888 container remove a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:31:22 np0005545273 systemd[1]: libpod-conmon-a91190cb94d5bcdeb686e6d3aebc2995e558a72248ee54b9993287fa48336f86.scope: Deactivated successfully.
Dec  4 05:31:22 np0005545273 podman[222227]: 2025-12-04 10:31:22.671892931 +0000 UTC m=+0.103407948 container create a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:31:22 np0005545273 podman[222227]: 2025-12-04 10:31:22.591191377 +0000 UTC m=+0.022706424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:31:22 np0005545273 systemd[1]: Started libpod-conmon-a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be.scope.
Dec  4 05:31:22 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:22 np0005545273 podman[222227]: 2025-12-04 10:31:22.754960642 +0000 UTC m=+0.186475689 container init a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:31:22 np0005545273 podman[222227]: 2025-12-04 10:31:22.763170342 +0000 UTC m=+0.194685369 container start a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:31:22 np0005545273 podman[222227]: 2025-12-04 10:31:22.802835857 +0000 UTC m=+0.234350884 container attach a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:31:22 np0005545273 python3.9[222314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]: {
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:    "0": [
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:        {
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "devices": [
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "/dev/loop3"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            ],
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_name": "ceph_lv0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_size": "21470642176",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "name": "ceph_lv0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "tags": {
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cluster_name": "ceph",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.crush_device_class": "",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.encrypted": "0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.objectstore": "bluestore",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osd_id": "0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.type": "block",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.vdo": "0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.with_tpm": "0"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            },
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "type": "block",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "vg_name": "ceph_vg0"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:        }
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:    ],
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:    "1": [
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:        {
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "devices": [
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "/dev/loop4"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            ],
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_name": "ceph_lv1",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_size": "21470642176",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "name": "ceph_lv1",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "tags": {
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cluster_name": "ceph",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.crush_device_class": "",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.encrypted": "0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.objectstore": "bluestore",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osd_id": "1",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.type": "block",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.vdo": "0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.with_tpm": "0"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            },
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "type": "block",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "vg_name": "ceph_vg1"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:        }
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:    ],
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:    "2": [
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:        {
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "devices": [
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "/dev/loop5"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            ],
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_name": "ceph_lv2",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_size": "21470642176",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "name": "ceph_lv2",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "tags": {
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.cluster_name": "ceph",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.crush_device_class": "",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.encrypted": "0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.objectstore": "bluestore",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osd_id": "2",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.type": "block",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.vdo": "0",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:                "ceph.with_tpm": "0"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            },
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "type": "block",
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:            "vg_name": "ceph_vg2"
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:        }
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]:    ]
Dec  4 05:31:23 np0005545273 affectionate_nash[222309]: }
Dec  4 05:31:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:23 np0005545273 systemd[1]: libpod-a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be.scope: Deactivated successfully.
Dec  4 05:31:23 np0005545273 podman[222227]: 2025-12-04 10:31:23.087544096 +0000 UTC m=+0.519059113 container died a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:31:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-037dd14b0105c14c6128e5d9b71680e053c7c3b57f6dfd0f309422f6fdf7a98c-merged.mount: Deactivated successfully.
Dec  4 05:31:23 np0005545273 podman[222227]: 2025-12-04 10:31:23.132611242 +0000 UTC m=+0.564126269 container remove a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_nash, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:31:23 np0005545273 systemd[1]: libpod-conmon-a0bf05dc6b4784565cd55501b7502ed80723b2ea04f8d459c4a252b618bbc7be.scope: Deactivated successfully.
Dec  4 05:31:23 np0005545273 python3.9[222408]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:23 np0005545273 podman[222513]: 2025-12-04 10:31:23.571354359 +0000 UTC m=+0.039107633 container create a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:31:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:23 np0005545273 systemd[1]: Started libpod-conmon-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope.
Dec  4 05:31:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:23 np0005545273 podman[222513]: 2025-12-04 10:31:23.649327967 +0000 UTC m=+0.117081241 container init a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:31:23 np0005545273 podman[222513]: 2025-12-04 10:31:23.555091494 +0000 UTC m=+0.022844788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:31:23 np0005545273 podman[222513]: 2025-12-04 10:31:23.656523302 +0000 UTC m=+0.124276576 container start a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:31:23 np0005545273 clever_booth[222564]: 167 167
Dec  4 05:31:23 np0005545273 podman[222513]: 2025-12-04 10:31:23.662060516 +0000 UTC m=+0.129813820 container attach a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:31:23 np0005545273 systemd[1]: libpod-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope: Deactivated successfully.
Dec  4 05:31:23 np0005545273 conmon[222564]: conmon a96cb0910ec6e0e03026 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope/container/memory.events
Dec  4 05:31:23 np0005545273 podman[222513]: 2025-12-04 10:31:23.663538302 +0000 UTC m=+0.131291576 container died a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:31:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-44a14274776fb988382a37612240396233c4aafeac8fd13da355d2ac198a5104-merged.mount: Deactivated successfully.
Dec  4 05:31:23 np0005545273 podman[222513]: 2025-12-04 10:31:23.73413496 +0000 UTC m=+0.201888244 container remove a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:31:23 np0005545273 systemd[1]: libpod-conmon-a96cb0910ec6e0e0302686a9284b1d9f76e54616d6db923b0bff1eb8aa920665.scope: Deactivated successfully.
Dec  4 05:31:23 np0005545273 podman[222663]: 2025-12-04 10:31:23.897197538 +0000 UTC m=+0.046983084 container create c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:31:23 np0005545273 systemd[1]: Started libpod-conmon-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope.
Dec  4 05:31:23 np0005545273 podman[222663]: 2025-12-04 10:31:23.87840214 +0000 UTC m=+0.028187696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:31:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:23 np0005545273 podman[222663]: 2025-12-04 10:31:23.990699904 +0000 UTC m=+0.140485460 container init c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:31:23 np0005545273 podman[222663]: 2025-12-04 10:31:23.99628356 +0000 UTC m=+0.146069106 container start c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:31:23 np0005545273 podman[222663]: 2025-12-04 10:31:23.999825986 +0000 UTC m=+0.149611532 container attach c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:31:24 np0005545273 python3.9[222657]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:31:24 np0005545273 systemd[1]: Reloading.
Dec  4 05:31:24 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:31:24 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:31:24 np0005545273 lvm[222843]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:31:24 np0005545273 lvm[222844]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:31:24 np0005545273 lvm[222844]: VG ceph_vg1 finished
Dec  4 05:31:24 np0005545273 lvm[222843]: VG ceph_vg0 finished
Dec  4 05:31:24 np0005545273 lvm[222855]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:31:24 np0005545273 lvm[222855]: VG ceph_vg2 finished
Dec  4 05:31:24 np0005545273 dazzling_khorana[222680]: {}
Dec  4 05:31:24 np0005545273 podman[222663]: 2025-12-04 10:31:24.816610862 +0000 UTC m=+0.966396428 container died c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:31:24 np0005545273 systemd[1]: libpod-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope: Deactivated successfully.
Dec  4 05:31:24 np0005545273 systemd[1]: libpod-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope: Consumed 1.307s CPU time.
Dec  4 05:31:24 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e01e22cfaa638f99ef1160dfe47a1f02acf742fec656a2772bf5cec231daa9ef-merged.mount: Deactivated successfully.
Dec  4 05:31:24 np0005545273 podman[222663]: 2025-12-04 10:31:24.86788717 +0000 UTC m=+1.017672716 container remove c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:31:24 np0005545273 systemd[1]: libpod-conmon-c0c2535cda0460d5a0a5f8962a6c2e4713ace193b9d056006e0de1b765ad0193.scope: Deactivated successfully.
Dec  4 05:31:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:31:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:31:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:31:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:31:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:25 np0005545273 python3.9[222964]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:25 np0005545273 podman[223039]: 2025-12-04 10:31:25.413822175 +0000 UTC m=+0.091327673 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec  4 05:31:25 np0005545273 python3.9[223086]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:31:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:31:26 np0005545273 python3.9[223245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:26 np0005545273 podman[223295]: 2025-12-04 10:31:26.499905565 +0000 UTC m=+0.049742542 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  4 05:31:26 np0005545273 python3.9[223343]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:31:26
Dec  4 05:31:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:31:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:31:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'volumes', 'vms']
Dec  4 05:31:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:31:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:27 np0005545273 python3.9[223495]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:31:27 np0005545273 systemd[1]: Reloading.
Dec  4 05:31:27 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:31:27 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:31:27 np0005545273 systemd[1]: Starting Create netns directory...
Dec  4 05:31:27 np0005545273 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  4 05:31:27 np0005545273 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  4 05:31:27 np0005545273 systemd[1]: Finished Create netns directory.
Dec  4 05:31:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:31:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:31:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:31:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:31:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:31:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:31:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:31:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:28 np0005545273 python3.9[223688]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:31:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:29 np0005545273 python3.9[223840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:30 np0005545273 python3.9[223963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844289.018317-437-14735958788773/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:31:30 np0005545273 python3.9[224115]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:31:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:31 np0005545273 python3.9[224267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:32 np0005545273 python3.9[224390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844290.9734116-462-208817445594135/.source.json _original_basename=.g71v3mer follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:32 np0005545273 python3.9[224542]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:34 np0005545273 python3.9[224969]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  4 05:31:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:35 np0005545273 python3.9[225121]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 05:31:36 np0005545273 python3.9[225273]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:31:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:31:38 np0005545273 python3[225450]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 05:31:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:39 np0005545273 podman[225463]: 2025-12-04 10:31:39.514373472 +0000 UTC m=+1.104218003 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  4 05:31:39 np0005545273 podman[225520]: 2025-12-04 10:31:39.677948112 +0000 UTC m=+0.054413785 container create fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  4 05:31:39 np0005545273 podman[225520]: 2025-12-04 10:31:39.649416617 +0000 UTC m=+0.025882340 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  4 05:31:39 np0005545273 python3[225450]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  4 05:31:40 np0005545273 python3.9[225710]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:31:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:41 np0005545273 python3.9[225864]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:41 np0005545273 python3.9[225940]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:31:42 np0005545273 python3.9[226091]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764844301.651559-550-118408734241082/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:42 np0005545273 python3.9[226167]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:31:42 np0005545273 systemd[1]: Reloading.
Dec  4 05:31:43 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:31:43 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:31:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:43 np0005545273 python3.9[226278]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:31:43 np0005545273 systemd[1]: Reloading.
Dec  4 05:31:43 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:31:43 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:31:44 np0005545273 systemd[1]: Starting multipathd container...
Dec  4 05:31:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:44 np0005545273 systemd[1]: Started /usr/bin/podman healthcheck run fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.
Dec  4 05:31:44 np0005545273 podman[226318]: 2025-12-04 10:31:44.324224649 +0000 UTC m=+0.119327344 container init fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  4 05:31:44 np0005545273 multipathd[226333]: + sudo -E kolla_set_configs
Dec  4 05:31:44 np0005545273 podman[226318]: 2025-12-04 10:31:44.350945369 +0000 UTC m=+0.146048054 container start fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:31:44 np0005545273 podman[226318]: multipathd
Dec  4 05:31:44 np0005545273 systemd[1]: Started multipathd container.
Dec  4 05:31:44 np0005545273 multipathd[226333]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 05:31:44 np0005545273 multipathd[226333]: INFO:__main__:Validating config file
Dec  4 05:31:44 np0005545273 multipathd[226333]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 05:31:44 np0005545273 multipathd[226333]: INFO:__main__:Writing out command to execute
Dec  4 05:31:44 np0005545273 multipathd[226333]: ++ cat /run_command
Dec  4 05:31:44 np0005545273 multipathd[226333]: + CMD='/usr/sbin/multipathd -d'
Dec  4 05:31:44 np0005545273 multipathd[226333]: + ARGS=
Dec  4 05:31:44 np0005545273 multipathd[226333]: + sudo kolla_copy_cacerts
Dec  4 05:31:44 np0005545273 multipathd[226333]: + [[ ! -n '' ]]
Dec  4 05:31:44 np0005545273 multipathd[226333]: + . kolla_extend_start
Dec  4 05:31:44 np0005545273 multipathd[226333]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  4 05:31:44 np0005545273 multipathd[226333]: Running command: '/usr/sbin/multipathd -d'
Dec  4 05:31:44 np0005545273 multipathd[226333]: + umask 0022
Dec  4 05:31:44 np0005545273 multipathd[226333]: + exec /usr/sbin/multipathd -d
Dec  4 05:31:44 np0005545273 podman[226340]: 2025-12-04 10:31:44.451122038 +0000 UTC m=+0.087768858 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:31:44 np0005545273 multipathd[226333]: 3414.098598 | --------start up--------
Dec  4 05:31:44 np0005545273 multipathd[226333]: 3414.098621 | read /etc/multipath.conf
Dec  4 05:31:44 np0005545273 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-199efdf6cc04dfc4.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 05:31:44 np0005545273 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-199efdf6cc04dfc4.service: Failed with result 'exit-code'.
Dec  4 05:31:44 np0005545273 multipathd[226333]: 3414.105702 | path checkers start up
Dec  4 05:31:44 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:31:44 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:31:45 np0005545273 python3.9[226522]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:31:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:45 np0005545273 python3.9[226676]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:31:46 np0005545273 python3.9[226841]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:31:46 np0005545273 systemd[1]: Stopping multipathd container...
Dec  4 05:31:46 np0005545273 multipathd[226333]: 3416.375332 | exit (signal)
Dec  4 05:31:46 np0005545273 multipathd[226333]: 3416.375444 | --------shut down-------
Dec  4 05:31:46 np0005545273 systemd[1]: libpod-fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.scope: Deactivated successfully.
Dec  4 05:31:46 np0005545273 conmon[226333]: conmon fe10987cdf96bb2ef3a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.scope/container/memory.events
Dec  4 05:31:46 np0005545273 podman[226845]: 2025-12-04 10:31:46.762521826 +0000 UTC m=+0.080664635 container stop fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:31:46 np0005545273 podman[226845]: 2025-12-04 10:31:46.793294245 +0000 UTC m=+0.111437074 container died fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec  4 05:31:46 np0005545273 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-199efdf6cc04dfc4.timer: Deactivated successfully.
Dec  4 05:31:46 np0005545273 systemd[1]: Stopped /usr/bin/podman healthcheck run fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.
Dec  4 05:31:46 np0005545273 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-userdata-shm.mount: Deactivated successfully.
Dec  4 05:31:46 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670-merged.mount: Deactivated successfully.
Dec  4 05:31:47 np0005545273 podman[226845]: 2025-12-04 10:31:47.036441991 +0000 UTC m=+0.354584810 container cleanup fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  4 05:31:47 np0005545273 podman[226845]: multipathd
Dec  4 05:31:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:47 np0005545273 podman[226874]: multipathd
Dec  4 05:31:47 np0005545273 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  4 05:31:47 np0005545273 systemd[1]: Stopped multipathd container.
Dec  4 05:31:47 np0005545273 systemd[1]: Starting multipathd container...
Dec  4 05:31:47 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:31:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:47 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13740d34036760c31babf8991605527f17e863a29bcf31642e103f5e7ec4670/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  4 05:31:47 np0005545273 systemd[1]: Started /usr/bin/podman healthcheck run fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4.
Dec  4 05:31:47 np0005545273 podman[226887]: 2025-12-04 10:31:47.268331534 +0000 UTC m=+0.119512410 container init fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  4 05:31:47 np0005545273 multipathd[226903]: + sudo -E kolla_set_configs
Dec  4 05:31:47 np0005545273 podman[226887]: 2025-12-04 10:31:47.293728902 +0000 UTC m=+0.144909728 container start fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  4 05:31:47 np0005545273 podman[226887]: multipathd
Dec  4 05:31:47 np0005545273 systemd[1]: Started multipathd container.
Dec  4 05:31:47 np0005545273 multipathd[226903]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 05:31:47 np0005545273 multipathd[226903]: INFO:__main__:Validating config file
Dec  4 05:31:47 np0005545273 multipathd[226903]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 05:31:47 np0005545273 multipathd[226903]: INFO:__main__:Writing out command to execute
Dec  4 05:31:47 np0005545273 multipathd[226903]: ++ cat /run_command
Dec  4 05:31:47 np0005545273 podman[226910]: 2025-12-04 10:31:47.368886841 +0000 UTC m=+0.065882524 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec  4 05:31:47 np0005545273 multipathd[226903]: + CMD='/usr/sbin/multipathd -d'
Dec  4 05:31:47 np0005545273 multipathd[226903]: + ARGS=
Dec  4 05:31:47 np0005545273 multipathd[226903]: + sudo kolla_copy_cacerts
Dec  4 05:31:47 np0005545273 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-2e948bfe886be7a5.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 05:31:47 np0005545273 systemd[1]: fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4-2e948bfe886be7a5.service: Failed with result 'exit-code'.
Dec  4 05:31:47 np0005545273 multipathd[226903]: + [[ ! -n '' ]]
Dec  4 05:31:47 np0005545273 multipathd[226903]: + . kolla_extend_start
Dec  4 05:31:47 np0005545273 multipathd[226903]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  4 05:31:47 np0005545273 multipathd[226903]: Running command: '/usr/sbin/multipathd -d'
Dec  4 05:31:47 np0005545273 multipathd[226903]: + umask 0022
Dec  4 05:31:47 np0005545273 multipathd[226903]: + exec /usr/sbin/multipathd -d
Dec  4 05:31:47 np0005545273 multipathd[226903]: 3417.046486 | --------start up--------
Dec  4 05:31:47 np0005545273 multipathd[226903]: 3417.046509 | read /etc/multipath.conf
Dec  4 05:31:47 np0005545273 multipathd[226903]: 3417.052272 | path checkers start up
Dec  4 05:31:47 np0005545273 python3.9[227094]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.599905) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308600121, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1744, "num_deletes": 250, "total_data_size": 2956038, "memory_usage": 2991552, "flush_reason": "Manual Compaction"}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308612843, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1670440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11794, "largest_seqno": 13537, "table_properties": {"data_size": 1664710, "index_size": 2869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14340, "raw_average_key_size": 20, "raw_value_size": 1652065, "raw_average_value_size": 2317, "num_data_blocks": 132, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844111, "oldest_key_time": 1764844111, "file_creation_time": 1764844308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 12979 microseconds, and 4850 cpu microseconds.
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.612899) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1670440 bytes OK
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.612928) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.614986) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615001) EVENT_LOG_v1 {"time_micros": 1764844308614997, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615021) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2948620, prev total WAL file size 2948620, number of live WAL files 2.
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1631KB)], [29(7867KB)]
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308615887, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9726619, "oldest_snapshot_seqno": -1}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4041 keys, 7623222 bytes, temperature: kUnknown
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308693462, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7623222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7594722, "index_size": 17318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 96276, "raw_average_key_size": 23, "raw_value_size": 7520371, "raw_average_value_size": 1861, "num_data_blocks": 754, "num_entries": 4041, "num_filter_entries": 4041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.693786) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7623222 bytes
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.695330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.2 rd, 98.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.4) write-amplify(4.6) OK, records in: 4460, records dropped: 419 output_compression: NoCompression
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.695347) EVENT_LOG_v1 {"time_micros": 1764844308695339, "job": 12, "event": "compaction_finished", "compaction_time_micros": 77711, "compaction_time_cpu_micros": 18094, "output_level": 6, "num_output_files": 1, "total_output_size": 7623222, "num_input_records": 4460, "num_output_records": 4041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308695686, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844308696731, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.615680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:31:48 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:31:48.696791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:31:48 np0005545273 python3.9[227246]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  4 05:31:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:49 np0005545273 python3.9[227398]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  4 05:31:49 np0005545273 kernel: Key type psk registered
Dec  4 05:31:50 np0005545273 python3.9[227562]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:31:50 np0005545273 python3.9[227685]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764844309.708103-630-158719632535457/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:51 np0005545273 python3.9[227837]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:31:52 np0005545273 python3.9[227989]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:31:52 np0005545273 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  4 05:31:52 np0005545273 systemd[1]: Stopped Load Kernel Modules.
Dec  4 05:31:52 np0005545273 systemd[1]: Stopping Load Kernel Modules...
Dec  4 05:31:52 np0005545273 systemd[1]: Starting Load Kernel Modules...
Dec  4 05:31:52 np0005545273 systemd[1]: Finished Load Kernel Modules.
Dec  4 05:31:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:53 np0005545273 python3.9[228145]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 05:31:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:54 np0005545273 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  4 05:31:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:31:54.897 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:31:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:31:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:31:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:31:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:31:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:31:55 np0005545273 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  4 05:31:55 np0005545273 podman[228151]: 2025-12-04 10:31:55.756618336 +0000 UTC m=+0.129403800 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec  4 05:31:56 np0005545273 podman[228179]: 2025-12-04 10:31:56.939170323 +0000 UTC m=+0.052666650 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  4 05:31:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec  4 05:31:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:31:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:31:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:31:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:31:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:31:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:31:58 np0005545273 systemd[1]: Reloading.
Dec  4 05:31:58 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:31:58 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:31:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:31:58 np0005545273 systemd[1]: Reloading.
Dec  4 05:31:58 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:31:58 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:31:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 05:31:59 np0005545273 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  4 05:31:59 np0005545273 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  4 05:31:59 np0005545273 lvm[228310]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:31:59 np0005545273 lvm[228310]: VG ceph_vg0 finished
Dec  4 05:31:59 np0005545273 lvm[228311]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:31:59 np0005545273 lvm[228311]: VG ceph_vg1 finished
Dec  4 05:31:59 np0005545273 lvm[228313]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:31:59 np0005545273 lvm[228313]: VG ceph_vg2 finished
Dec  4 05:31:59 np0005545273 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 05:31:59 np0005545273 systemd[1]: Starting man-db-cache-update.service...
Dec  4 05:31:59 np0005545273 systemd[1]: Reloading.
Dec  4 05:31:59 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:31:59 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:31:59 np0005545273 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 05:32:00 np0005545273 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 05:32:00 np0005545273 systemd[1]: Finished man-db-cache-update.service.
Dec  4 05:32:00 np0005545273 systemd[1]: man-db-cache-update.service: Consumed 1.599s CPU time.
Dec  4 05:32:00 np0005545273 systemd[1]: run-rb576fe760eba4316866b9652e39ad915.service: Deactivated successfully.
Dec  4 05:32:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 05:32:01 np0005545273 python3.9[229657]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:32:01 np0005545273 systemd[1]: Stopping Open-iSCSI...
Dec  4 05:32:01 np0005545273 iscsid[217362]: iscsid shutting down.
Dec  4 05:32:01 np0005545273 systemd[1]: iscsid.service: Deactivated successfully.
Dec  4 05:32:01 np0005545273 systemd[1]: Stopped Open-iSCSI.
Dec  4 05:32:01 np0005545273 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  4 05:32:01 np0005545273 systemd[1]: Starting Open-iSCSI...
Dec  4 05:32:01 np0005545273 systemd[1]: Started Open-iSCSI.
Dec  4 05:32:02 np0005545273 python3.9[229811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 05:32:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 05:32:03 np0005545273 python3.9[229967]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:04 np0005545273 python3.9[230119]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:32:04 np0005545273 systemd[1]: Reloading.
Dec  4 05:32:04 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:32:04 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:32:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 05:32:05 np0005545273 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  4 05:32:05 np0005545273 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  4 05:32:05 np0005545273 python3.9[230306]: ansible-ansible.builtin.service_facts Invoked
Dec  4 05:32:05 np0005545273 network[230323]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 05:32:05 np0005545273 network[230324]: 'network-scripts' will be removed from distribution in near future.
Dec  4 05:32:05 np0005545273 network[230325]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 05:32:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 05:32:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec  4 05:32:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:11 np0005545273 python3.9[230602]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:12 np0005545273 python3.9[230755]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:13 np0005545273 python3.9[230908]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:14 np0005545273 python3.9[231061]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:15 np0005545273 python3.9[231214]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:15 np0005545273 python3.9[231367]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:16 np0005545273 python3.9[231520]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:17 np0005545273 python3.9[231673]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:32:17 np0005545273 podman[231675]: 2025-12-04 10:32:17.619543439 +0000 UTC m=+0.068830826 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:32:18 np0005545273 python3.9[231846]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:19 np0005545273 python3.9[231998]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:19 np0005545273 python3.9[232150]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:20 np0005545273 python3.9[232302]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:20 np0005545273 python3.9[232454]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:21 np0005545273 python3.9[232606]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:22 np0005545273 python3.9[232758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:22 np0005545273 python3.9[232910]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:23 np0005545273 python3.9[233062]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:24 np0005545273 python3.9[233214]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:24 np0005545273 python3.9[233366]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:25 np0005545273 python3.9[233568]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:32:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:32:25 np0005545273 podman[233741]: 2025-12-04 10:32:25.928160388 +0000 UTC m=+0.113247957 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:32:26 np0005545273 python3.9[233817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:32:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:32:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:32:26 np0005545273 podman[233851]: 2025-12-04 10:32:26.17767733 +0000 UTC m=+0.052969009 container create a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:32:26 np0005545273 systemd[1]: Started libpod-conmon-a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53.scope.
Dec  4 05:32:26 np0005545273 podman[233851]: 2025-12-04 10:32:26.151910993 +0000 UTC m=+0.027202722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:32:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:32:26 np0005545273 podman[233851]: 2025-12-04 10:32:26.272708563 +0000 UTC m=+0.148000242 container init a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:32:26 np0005545273 podman[233851]: 2025-12-04 10:32:26.280682287 +0000 UTC m=+0.155973966 container start a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:32:26 np0005545273 podman[233851]: 2025-12-04 10:32:26.284255594 +0000 UTC m=+0.159547273 container attach a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:32:26 np0005545273 nice_aryabhata[233901]: 167 167
Dec  4 05:32:26 np0005545273 systemd[1]: libpod-a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53.scope: Deactivated successfully.
Dec  4 05:32:26 np0005545273 podman[233851]: 2025-12-04 10:32:26.287426461 +0000 UTC m=+0.162718140 container died a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:32:26 np0005545273 systemd[1]: var-lib-containers-storage-overlay-604d27f7f4f5d9c3c2f3a73506bec14d21c716c4ebb2f3ab723c22677ce31ce0-merged.mount: Deactivated successfully.
Dec  4 05:32:26 np0005545273 podman[233851]: 2025-12-04 10:32:26.324163325 +0000 UTC m=+0.199455004 container remove a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:32:26 np0005545273 systemd[1]: libpod-conmon-a9c399e58152e2a6a5be84b74222722a8ed8a87bc1341cefd7e07ba44cbe4e53.scope: Deactivated successfully.
Dec  4 05:32:26 np0005545273 podman[233999]: 2025-12-04 10:32:26.505119768 +0000 UTC m=+0.048987823 container create 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:32:26 np0005545273 systemd[1]: Started libpod-conmon-27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1.scope.
Dec  4 05:32:26 np0005545273 podman[233999]: 2025-12-04 10:32:26.481160515 +0000 UTC m=+0.025028590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:32:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:32:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:26 np0005545273 podman[233999]: 2025-12-04 10:32:26.598910921 +0000 UTC m=+0.142778996 container init 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:32:26 np0005545273 podman[233999]: 2025-12-04 10:32:26.609886338 +0000 UTC m=+0.153754393 container start 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:32:26 np0005545273 podman[233999]: 2025-12-04 10:32:26.616286834 +0000 UTC m=+0.160154889 container attach 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:32:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:32:26
Dec  4 05:32:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:32:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:32:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', 'volumes', 'vms', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Dec  4 05:32:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:32:26 np0005545273 python3.9[234042]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:27 np0005545273 intelligent_leakey[234045]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:32:27 np0005545273 intelligent_leakey[234045]: --> All data devices are unavailable
Dec  4 05:32:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:27 np0005545273 systemd[1]: libpod-27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1.scope: Deactivated successfully.
Dec  4 05:32:27 np0005545273 podman[233999]: 2025-12-04 10:32:27.154922041 +0000 UTC m=+0.698790116 container died 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:32:27 np0005545273 python3.9[234240]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-252b07b2c23b710c86638005d85471be7a282d80ef134424593d3bc238967acf-merged.mount: Deactivated successfully.
Dec  4 05:32:27 np0005545273 podman[233999]: 2025-12-04 10:32:27.539660924 +0000 UTC m=+1.083528979 container remove 27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_leakey, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:32:27 np0005545273 podman[234190]: 2025-12-04 10:32:27.543366935 +0000 UTC m=+0.406737860 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:32:27 np0005545273 systemd[1]: libpod-conmon-27e3c5964da47a7c2c89bcc128e4f0b2abcab023776847e8189e9a3e93d397c1.scope: Deactivated successfully.
Dec  4 05:32:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:32:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:32:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:32:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:32:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:32:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:32:28 np0005545273 podman[234465]: 2025-12-04 10:32:28.022313642 +0000 UTC m=+0.047858519 container create 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:32:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:32:28 np0005545273 systemd[1]: Started libpod-conmon-18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733.scope.
Dec  4 05:32:28 np0005545273 python3.9[234452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:32:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:32:28 np0005545273 podman[234465]: 2025-12-04 10:32:27.997074901 +0000 UTC m=+0.022619798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:32:28 np0005545273 podman[234465]: 2025-12-04 10:32:28.116462242 +0000 UTC m=+0.142007229 container init 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:32:28 np0005545273 podman[234465]: 2025-12-04 10:32:28.125591813 +0000 UTC m=+0.151136690 container start 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:32:28 np0005545273 podman[234465]: 2025-12-04 10:32:28.129425695 +0000 UTC m=+0.154970692 container attach 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:32:28 np0005545273 nifty_wright[234482]: 167 167
Dec  4 05:32:28 np0005545273 systemd[1]: libpod-18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733.scope: Deactivated successfully.
Dec  4 05:32:28 np0005545273 podman[234465]: 2025-12-04 10:32:28.133684298 +0000 UTC m=+0.159229165 container died 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:32:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-249efb296f630a14b4e97a0ce5f9451c5f77e1b5e0a174b1f619b77bcc847a48-merged.mount: Deactivated successfully.
Dec  4 05:32:28 np0005545273 podman[234465]: 2025-12-04 10:32:28.183414723 +0000 UTC m=+0.208959620 container remove 18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:32:28 np0005545273 systemd[1]: libpod-conmon-18a7614af9f1773956065f56a1ae3fb3de91b8b8f407e6d7346e829fb5bbf733.scope: Deactivated successfully.
Dec  4 05:32:28 np0005545273 podman[234528]: 2025-12-04 10:32:28.35797661 +0000 UTC m=+0.045321199 container create 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:32:28 np0005545273 systemd[1]: Started libpod-conmon-906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0.scope.
Dec  4 05:32:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:32:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:28 np0005545273 podman[234528]: 2025-12-04 10:32:28.338942498 +0000 UTC m=+0.026287117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:32:28 np0005545273 podman[234528]: 2025-12-04 10:32:28.43770897 +0000 UTC m=+0.125053579 container init 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:32:28 np0005545273 podman[234528]: 2025-12-04 10:32:28.446060092 +0000 UTC m=+0.133404681 container start 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:32:28 np0005545273 podman[234528]: 2025-12-04 10:32:28.449947206 +0000 UTC m=+0.137291815 container attach 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:32:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:28 np0005545273 happy_saha[234565]: {
Dec  4 05:32:28 np0005545273 happy_saha[234565]:    "0": [
Dec  4 05:32:28 np0005545273 happy_saha[234565]:        {
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "devices": [
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "/dev/loop3"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            ],
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_name": "ceph_lv0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_size": "21470642176",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "name": "ceph_lv0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "tags": {
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cluster_name": "ceph",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.crush_device_class": "",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.encrypted": "0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.objectstore": "bluestore",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osd_id": "0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.type": "block",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.vdo": "0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.with_tpm": "0"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            },
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "type": "block",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "vg_name": "ceph_vg0"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:        }
Dec  4 05:32:28 np0005545273 happy_saha[234565]:    ],
Dec  4 05:32:28 np0005545273 happy_saha[234565]:    "1": [
Dec  4 05:32:28 np0005545273 happy_saha[234565]:        {
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "devices": [
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "/dev/loop4"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            ],
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_name": "ceph_lv1",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_size": "21470642176",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "name": "ceph_lv1",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "tags": {
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cluster_name": "ceph",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.crush_device_class": "",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.encrypted": "0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.objectstore": "bluestore",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osd_id": "1",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.type": "block",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.vdo": "0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.with_tpm": "0"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            },
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "type": "block",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "vg_name": "ceph_vg1"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:        }
Dec  4 05:32:28 np0005545273 happy_saha[234565]:    ],
Dec  4 05:32:28 np0005545273 happy_saha[234565]:    "2": [
Dec  4 05:32:28 np0005545273 happy_saha[234565]:        {
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "devices": [
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "/dev/loop5"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            ],
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_name": "ceph_lv2",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_size": "21470642176",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "name": "ceph_lv2",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "tags": {
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.cluster_name": "ceph",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.crush_device_class": "",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.encrypted": "0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.objectstore": "bluestore",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osd_id": "2",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.type": "block",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.vdo": "0",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:                "ceph.with_tpm": "0"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            },
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "type": "block",
Dec  4 05:32:28 np0005545273 happy_saha[234565]:            "vg_name": "ceph_vg2"
Dec  4 05:32:28 np0005545273 happy_saha[234565]:        }
Dec  4 05:32:28 np0005545273 happy_saha[234565]:    ]
Dec  4 05:32:28 np0005545273 happy_saha[234565]: }
Dec  4 05:32:28 np0005545273 systemd[1]: libpod-906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0.scope: Deactivated successfully.
Dec  4 05:32:28 np0005545273 podman[234528]: 2025-12-04 10:32:28.808520617 +0000 UTC m=+0.495865216 container died 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:32:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-27fdf413d1743e46c2fad497e39b7ea707dad65191975241aeeee53cd2ce9e43-merged.mount: Deactivated successfully.
Dec  4 05:32:28 np0005545273 podman[234528]: 2025-12-04 10:32:28.872497687 +0000 UTC m=+0.559842276 container remove 906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:32:28 np0005545273 systemd[1]: libpod-conmon-906191cfffc572cb16de4b50d84a490ab9dd6fb18b4527c6beac473a465047e0.scope: Deactivated successfully.
Dec  4 05:32:28 np0005545273 python3.9[234681]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:29 np0005545273 podman[234834]: 2025-12-04 10:32:29.345318635 +0000 UTC m=+0.026781659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:32:29 np0005545273 podman[234834]: 2025-12-04 10:32:29.445047419 +0000 UTC m=+0.126510383 container create 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:32:29 np0005545273 systemd[1]: Started libpod-conmon-7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846.scope.
Dec  4 05:32:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:32:29 np0005545273 podman[234834]: 2025-12-04 10:32:29.554002677 +0000 UTC m=+0.235465641 container init 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:32:29 np0005545273 podman[234834]: 2025-12-04 10:32:29.563415986 +0000 UTC m=+0.244878920 container start 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:32:29 np0005545273 podman[234834]: 2025-12-04 10:32:29.56771741 +0000 UTC m=+0.249180374 container attach 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:32:29 np0005545273 eager_gagarin[234873]: 167 167
Dec  4 05:32:29 np0005545273 systemd[1]: libpod-7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846.scope: Deactivated successfully.
Dec  4 05:32:29 np0005545273 podman[234834]: 2025-12-04 10:32:29.57106956 +0000 UTC m=+0.252532554 container died 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:32:29 np0005545273 python3.9[234940]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 05:32:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ce9e78cef61cb8abdc5bbcbba6f90b3e9f16fc2adc3ad7a78d822a378ae7c759-merged.mount: Deactivated successfully.
Dec  4 05:32:30 np0005545273 podman[234834]: 2025-12-04 10:32:30.034361557 +0000 UTC m=+0.715824521 container remove 7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:32:30 np0005545273 systemd[1]: libpod-conmon-7eef62e8e17c88448c10c4008e5590f93fe865bc2a156b64653ca0db2b201846.scope: Deactivated successfully.
Dec  4 05:32:30 np0005545273 podman[235026]: 2025-12-04 10:32:30.201829283 +0000 UTC m=+0.043212778 container create d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:32:30 np0005545273 systemd[1]: Started libpod-conmon-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope.
Dec  4 05:32:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:32:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:32:30 np0005545273 podman[235026]: 2025-12-04 10:32:30.18317674 +0000 UTC m=+0.024560255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:32:30 np0005545273 podman[235026]: 2025-12-04 10:32:30.285691832 +0000 UTC m=+0.127075337 container init d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:32:30 np0005545273 podman[235026]: 2025-12-04 10:32:30.295176932 +0000 UTC m=+0.136560427 container start d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:32:30 np0005545273 podman[235026]: 2025-12-04 10:32:30.29962678 +0000 UTC m=+0.141010305 container attach d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:32:30 np0005545273 python3.9[235121]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:32:30 np0005545273 systemd[1]: Reloading.
Dec  4 05:32:30 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:32:30 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:32:31 np0005545273 wizardly_yalow[235064]: {}
Dec  4 05:32:31 np0005545273 lvm[235231]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:32:31 np0005545273 lvm[235235]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:32:31 np0005545273 lvm[235234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:32:31 np0005545273 lvm[235231]: VG ceph_vg0 finished
Dec  4 05:32:31 np0005545273 lvm[235234]: VG ceph_vg1 finished
Dec  4 05:32:31 np0005545273 lvm[235235]: VG ceph_vg2 finished
Dec  4 05:32:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:31 np0005545273 systemd[1]: libpod-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope: Deactivated successfully.
Dec  4 05:32:31 np0005545273 systemd[1]: libpod-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope: Consumed 1.402s CPU time.
Dec  4 05:32:31 np0005545273 conmon[235064]: conmon d23de91045872c81c2e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope/container/memory.events
Dec  4 05:32:31 np0005545273 podman[235026]: 2025-12-04 10:32:31.123323833 +0000 UTC m=+0.964707348 container died d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:32:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fbda475ad6e243a6f6a8e6e6ca93daa8a35fc99cd3083aca9797a3654f361500-merged.mount: Deactivated successfully.
Dec  4 05:32:31 np0005545273 podman[235026]: 2025-12-04 10:32:31.166470907 +0000 UTC m=+1.007854392 container remove d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:32:31 np0005545273 systemd[1]: libpod-conmon-d23de91045872c81c2e079fa58c95c38c54ac80b1543ed243b0083c19f6567bd.scope: Deactivated successfully.
Dec  4 05:32:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:32:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:32:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:32:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:32:31 np0005545273 python3.9[235423]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:32:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:32:32 np0005545273 python3.9[235576]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:33 np0005545273 python3.9[235729]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:33 np0005545273 python3.9[235882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:34 np0005545273 python3.9[236035]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:34 np0005545273 python3.9[236188]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:35 np0005545273 python3.9[236341]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:36 np0005545273 python3.9[236494]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:32:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:37 np0005545273 python3.9[236647]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:38 np0005545273 python3.9[236799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:38 np0005545273 python3.9[236951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:39 np0005545273 python3.9[237105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:40 np0005545273 python3.9[237257]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:40 np0005545273 python3.9[237409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:41 np0005545273 python3.9[237561]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:42 np0005545273 python3.9[237713]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:43 np0005545273 python3.9[237865]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:44 np0005545273 python3.9[238017]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:47 np0005545273 podman[238042]: 2025-12-04 10:32:47.962917416 +0000 UTC m=+0.061003758 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  4 05:32:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:49 np0005545273 python3.9[238190]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  4 05:32:50 np0005545273 python3.9[238343]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 05:32:50 np0005545273 python3.9[238501]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 05:32:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:51 np0005545273 systemd-logind[798]: New session 51 of user zuul.
Dec  4 05:32:51 np0005545273 systemd[1]: Started Session 51 of User zuul.
Dec  4 05:32:52 np0005545273 systemd[1]: session-51.scope: Deactivated successfully.
Dec  4 05:32:52 np0005545273 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Dec  4 05:32:52 np0005545273 systemd-logind[798]: Removed session 51.
Dec  4 05:32:52 np0005545273 python3.9[238687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:32:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:53 np0005545273 python3.9[238808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844372.2623973-1249-186420345472586/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:53 np0005545273 python3.9[238958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:32:54 np0005545273 python3.9[239034]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:54 np0005545273 python3.9[239184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:32:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:32:54.898 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:32:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:32:54.899 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:32:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:32:54.899 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:32:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:55 np0005545273 python3.9[239305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844374.3197238-1249-233661201797640/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:55 np0005545273 python3.9[239455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:32:56 np0005545273 podman[239550]: 2025-12-04 10:32:56.32718519 +0000 UTC m=+0.104329767 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  4 05:32:56 np0005545273 python3.9[239591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844375.397189-1249-109819128706089/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:56 np0005545273 python3.9[239751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:32:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:57 np0005545273 python3.9[239872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844376.5887334-1249-131569728837859/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:32:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:32:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:32:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:32:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:32:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:32:57 np0005545273 podman[239996]: 2025-12-04 10:32:57.942299074 +0000 UTC m=+0.051402196 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:32:58 np0005545273 python3.9[240035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:32:58 np0005545273 python3.9[240162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844377.6326604-1249-98288261275346/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:32:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:32:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:32:59 np0005545273 python3.9[240314]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:33:00 np0005545273 python3.9[240466]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:33:00 np0005545273 python3.9[240618]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:33:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:01 np0005545273 python3.9[240770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:33:01 np0005545273 python3.9[240893]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764844380.891467-1356-53973366858985/.source _original_basename=.i55ys04b follow=False checksum=ab11a89d1b6d7fe91e220a46fbf2bb5f52f68c89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  4 05:33:02 np0005545273 python3.9[241046]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:33:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:03 np0005545273 python3.9[241199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:33:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:03 np0005545273 python3.9[241320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844382.9344735-1382-37164165681981/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:33:04 np0005545273 python3.9[241470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 05:33:05 np0005545273 python3.9[241591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764844384.067138-1397-44641461593624/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 05:33:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:05 np0005545273 python3.9[241744]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  4 05:33:06 np0005545273 python3.9[241897]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 05:33:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:07 np0005545273 python3[242049]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 05:33:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:17 np0005545273 podman[242064]: 2025-12-04 10:33:17.757615696 +0000 UTC m=+10.411336577 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  4 05:33:17 np0005545273 podman[242174]: 2025-12-04 10:33:17.959135905 +0000 UTC m=+0.072844685 container create f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec  4 05:33:17 np0005545273 podman[242174]: 2025-12-04 10:33:17.922133539 +0000 UTC m=+0.035842399 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  4 05:33:17 np0005545273 python3[242049]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  4 05:33:18 np0005545273 podman[242335]: 2025-12-04 10:33:18.579508536 +0000 UTC m=+0.066599294 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  4 05:33:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:18 np0005545273 python3.9[242377]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:33:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:19 np0005545273 python3.9[242535]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  4 05:33:20 np0005545273 python3.9[242687]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 05:33:21 np0005545273 python3[242839]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 05:33:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:21 np0005545273 podman[242877]: 2025-12-04 10:33:21.240421311 +0000 UTC m=+0.024738461 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  4 05:33:21 np0005545273 podman[242877]: 2025-12-04 10:33:21.359073554 +0000 UTC m=+0.143390674 container create f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  4 05:33:21 np0005545273 python3[242839]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  4 05:33:22 np0005545273 python3.9[243067]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:33:22 np0005545273 python3.9[243221]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:33:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:23 np0005545273 python3.9[243372]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764844402.8842034-1489-222912326697785/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 05:33:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:23 np0005545273 python3.9[243448]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 05:33:23 np0005545273 systemd[1]: Reloading.
Dec  4 05:33:24 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:33:24 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:33:24 np0005545273 python3.9[243559]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 05:33:24 np0005545273 systemd[1]: Reloading.
Dec  4 05:33:25 np0005545273 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 05:33:25 np0005545273 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 05:33:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:25 np0005545273 systemd[1]: Starting nova_compute container...
Dec  4 05:33:25 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:25 np0005545273 podman[243598]: 2025-12-04 10:33:25.428284746 +0000 UTC m=+0.096649951 container init f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:33:25 np0005545273 podman[243598]: 2025-12-04 10:33:25.434825395 +0000 UTC m=+0.103190560 container start f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:33:25 np0005545273 podman[243598]: nova_compute
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + sudo -E kolla_set_configs
Dec  4 05:33:25 np0005545273 systemd[1]: Started nova_compute container.
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Validating config file
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying service configuration files
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Deleting /etc/ceph
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Creating directory /etc/ceph
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Writing out command to execute
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:25 np0005545273 nova_compute[243612]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  4 05:33:25 np0005545273 nova_compute[243612]: ++ cat /run_command
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + CMD=nova-compute
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + ARGS=
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + sudo kolla_copy_cacerts
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + [[ ! -n '' ]]
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + . kolla_extend_start
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + echo 'Running command: '\''nova-compute'\'''
Dec  4 05:33:25 np0005545273 nova_compute[243612]: Running command: 'nova-compute'
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + umask 0022
Dec  4 05:33:25 np0005545273 nova_compute[243612]: + exec nova-compute
Dec  4 05:33:26 np0005545273 python3.9[243773]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:33:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:33:26
Dec  4 05:33:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:33:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:33:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr']
Dec  4 05:33:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:33:26 np0005545273 podman[243898]: 2025-12-04 10:33:26.964386557 +0000 UTC m=+0.084223070 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:33:27 np0005545273 python3.9[243941]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:33:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:27 np0005545273 python3.9[244101]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 05:33:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:33:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:33:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:33:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:33:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:33:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.026 243616 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.026 243616 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.026 243616 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.027 243616 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:33:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.240 243616 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.260 243616 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.261 243616 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  4 05:33:28 np0005545273 podman[244229]: 2025-12-04 10:33:28.589597007 +0000 UTC m=+0.061733996 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  4 05:33:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:28 np0005545273 python3.9[244276]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  4 05:33:28 np0005545273 nova_compute[243612]: 2025-12-04 10:33:28.863 243616 INFO nova.virt.driver [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  4 05:33:28 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.001 243616 INFO nova.compute.provider_config [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.018 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.018 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.019 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.020 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.021 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.022 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.023 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.024 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.025 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.026 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.027 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.027 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.028 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.029 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.030 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.031 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.032 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.033 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.034 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.035 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.036 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.037 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.038 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.039 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.040 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.041 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.042 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.043 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.044 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.045 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.046 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.047 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.048 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.049 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.050 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.051 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.052 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.053 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.054 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.055 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.056 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.057 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.058 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.059 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.060 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.061 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.062 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.063 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.064 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.065 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.066 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.067 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.068 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.069 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.070 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.071 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.072 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.073 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.074 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.075 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.076 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.077 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.078 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.079 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.079 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.079 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.080 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.081 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.082 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.083 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.084 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.085 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.086 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.087 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.088 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.089 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.090 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.091 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.092 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.093 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.094 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.095 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.096 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.097 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.098 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.099 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.100 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.101 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.102 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 WARNING oslo_config.cfg [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  4 05:33:29 np0005545273 nova_compute[243612]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  4 05:33:29 np0005545273 nova_compute[243612]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  4 05:33:29 np0005545273 nova_compute[243612]: and ``live_migration_inbound_addr`` respectively.
Dec  4 05:33:29 np0005545273 nova_compute[243612]: ).  Its value may be silently ignored in the future.#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.103 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.104 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.105 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.106 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.106 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.106 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.107 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.107 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_secret_uuid        = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.108 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.109 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.110 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.111 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.112 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.113 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.114 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.115 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.116 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.117 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.118 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.119 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.120 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.121 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.122 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.123 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.124 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.125 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.126 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.127 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.128 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.129 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.130 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.131 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.132 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.133 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.134 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.135 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.136 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.137 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.138 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.139 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.140 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.141 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.142 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.143 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.144 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.145 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.146 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.147 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.148 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.149 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.150 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.151 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.152 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.153 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.154 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.155 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.156 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.157 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.158 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.159 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.160 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.161 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.162 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.163 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.164 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.165 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.166 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.167 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.168 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.169 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.170 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.171 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.172 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.173 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.174 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.175 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.176 243616 DEBUG oslo_service.service [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.178 243616 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.195 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.197 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.197 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.197 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec  4 05:33:29 np0005545273 systemd[1]: Starting libvirt QEMU daemon...
Dec  4 05:33:29 np0005545273 systemd[1]: Started libvirt QEMU daemon.
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.279 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5ce1468fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.283 243616 DEBUG nova.virt.libvirt.host [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5ce1468fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.284 243616 INFO nova.virt.libvirt.driver [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Connection event '1' reason 'None'
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.300 243616 WARNING nova.virt.libvirt.driver [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.300 243616 DEBUG nova.virt.libvirt.volume.mount [None req-e38bbf9b-0def-4ef3-b1ff-ff73843144f8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  4 05:33:29 np0005545273 python3.9[244505]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 05:33:29 np0005545273 systemd[1]: Stopping nova_compute container...
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.907 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.907 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  4 05:33:29 np0005545273 nova_compute[243612]: 2025-12-04 10:33:29.907 243616 DEBUG oslo_concurrency.lockutils [None req-a3a3548b-aa80-45a6-a8d8-bc0379ffe06a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  4 05:33:30 np0005545273 virtqemud[244380]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  4 05:33:30 np0005545273 virtqemud[244380]: hostname: compute-0
Dec  4 05:33:30 np0005545273 virtqemud[244380]: End of file while reading data: Input/output error
Dec  4 05:33:30 np0005545273 systemd[1]: libpod-f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8.scope: Deactivated successfully.
Dec  4 05:33:30 np0005545273 systemd[1]: libpod-f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8.scope: Consumed 3.534s CPU time.
Dec  4 05:33:30 np0005545273 podman[244511]: 2025-12-04 10:33:30.673570753 +0000 UTC m=+0.810115955 container died f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  4 05:33:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8-userdata-shm.mount: Deactivated successfully.
Dec  4 05:33:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f-merged.mount: Deactivated successfully.
Dec  4 05:33:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:33 np0005545273 podman[244511]: 2025-12-04 10:33:33.551849742 +0000 UTC m=+3.688394924 container cleanup f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  4 05:33:33 np0005545273 podman[244511]: nova_compute
Dec  4 05:33:33 np0005545273 podman[244616]: nova_compute
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:33 np0005545273 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  4 05:33:33 np0005545273 systemd[1]: Stopped nova_compute container.
Dec  4 05:33:33 np0005545273 systemd[1]: Starting nova_compute container...
Dec  4 05:33:33 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395bad5d3aa2240934f3685ab20acb850209d80fe1675018fbfac2968cec8a7f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:33 np0005545273 podman[244627]: 2025-12-04 10:33:33.729635676 +0000 UTC m=+0.084269561 container init f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:33:33 np0005545273 podman[244627]: 2025-12-04 10:33:33.737272411 +0000 UTC m=+0.091906276 container start f539452210e448e722addf685ec65e70f778be7e1a1d234b6a11ec17e45a2bc8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0)
Dec  4 05:33:33 np0005545273 podman[244627]: nova_compute
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + sudo -E kolla_set_configs
Dec  4 05:33:33 np0005545273 systemd[1]: Started nova_compute container.
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Validating config file
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying service configuration files
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /etc/ceph
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Creating directory /etc/ceph
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Writing out command to execute
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:33 np0005545273 nova_compute[244644]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  4 05:33:33 np0005545273 nova_compute[244644]: ++ cat /run_command
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + CMD=nova-compute
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + ARGS=
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + sudo kolla_copy_cacerts
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + [[ ! -n '' ]]
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + . kolla_extend_start
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + echo 'Running command: '\''nova-compute'\'''
Dec  4 05:33:33 np0005545273 nova_compute[244644]: Running command: 'nova-compute'
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + umask 0022
Dec  4 05:33:33 np0005545273 nova_compute[244644]: + exec nova-compute
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:33:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:33:34 np0005545273 podman[244880]: 2025-12-04 10:33:34.275498782 +0000 UTC m=+0.043565305 container create e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:33:34 np0005545273 systemd[1]: Started libpod-conmon-e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6.scope.
Dec  4 05:33:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:34 np0005545273 podman[244880]: 2025-12-04 10:33:34.348280525 +0000 UTC m=+0.116347048 container init e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:33:34 np0005545273 podman[244880]: 2025-12-04 10:33:34.255201951 +0000 UTC m=+0.023268484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:33:34 np0005545273 podman[244880]: 2025-12-04 10:33:34.355380177 +0000 UTC m=+0.123446700 container start e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:33:34 np0005545273 podman[244880]: 2025-12-04 10:33:34.358943772 +0000 UTC m=+0.127010315 container attach e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:33:34 np0005545273 elegant_cannon[244901]: 167 167
Dec  4 05:33:34 np0005545273 systemd[1]: libpod-e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6.scope: Deactivated successfully.
Dec  4 05:33:34 np0005545273 podman[244880]: 2025-12-04 10:33:34.362086879 +0000 UTC m=+0.130153402 container died e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:33:34 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6906da004e5d977156cfa1225defe4eb03b150969a8df1533dcdd49cb3947545-merged.mount: Deactivated successfully.
Dec  4 05:33:34 np0005545273 podman[244880]: 2025-12-04 10:33:34.422023789 +0000 UTC m=+0.190090312 container remove e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cannon, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:33:34 np0005545273 systemd[1]: libpod-conmon-e53dd8ca0af74943ba5678bd046fba7ab1cde79613a88c8ef03ac7310d14d0d6.scope: Deactivated successfully.
Dec  4 05:33:34 np0005545273 python3.9[244898]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  4 05:33:34 np0005545273 podman[244928]: 2025-12-04 10:33:34.553492303 +0000 UTC m=+0.020612440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:33:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:33:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:33:34 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:33:34 np0005545273 podman[244928]: 2025-12-04 10:33:34.712217806 +0000 UTC m=+0.179337923 container create e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:33:34 np0005545273 systemd[1]: Started libpod-conmon-e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721.scope.
Dec  4 05:33:34 np0005545273 systemd[1]: Started libpod-conmon-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa.scope.
Dec  4 05:33:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:34 np0005545273 podman[244965]: 2025-12-04 10:33:34.819435162 +0000 UTC m=+0.218141062 container init f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  4 05:33:34 np0005545273 podman[244928]: 2025-12-04 10:33:34.825428758 +0000 UTC m=+0.292548895 container init e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  4 05:33:34 np0005545273 podman[244965]: 2025-12-04 10:33:34.829493926 +0000 UTC m=+0.228199816 container start f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  4 05:33:34 np0005545273 podman[244928]: 2025-12-04 10:33:34.835409129 +0000 UTC m=+0.302529246 container start e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:33:34 np0005545273 python3.9[244898]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  4 05:33:34 np0005545273 podman[244928]: 2025-12-04 10:33:34.844220442 +0000 UTC m=+0.311340579 container attach e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Applying nova statedir ownership
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  4 05:33:34 np0005545273 nova_compute_init[244992]: INFO:nova_statedir:Nova statedir ownership complete
Dec  4 05:33:34 np0005545273 systemd[1]: libpod-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa.scope: Deactivated successfully.
Dec  4 05:33:34 np0005545273 podman[245004]: 2025-12-04 10:33:34.924705061 +0000 UTC m=+0.024635778 container died f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:33:34 np0005545273 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa-userdata-shm.mount: Deactivated successfully.
Dec  4 05:33:34 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9bc933a60d49c08d62a32b55186d2743326b4968e9c21d2beb08d1cb1bb478c3-merged.mount: Deactivated successfully.
Dec  4 05:33:34 np0005545273 podman[245004]: 2025-12-04 10:33:34.964874813 +0000 UTC m=+0.064805510 container cleanup f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  4 05:33:34 np0005545273 systemd[1]: libpod-conmon-f24066bf8964aa9ce403c773a8f3d64a68d711144b4f0ac96b8f71e946f50eaa.scope: Deactivated successfully.
Dec  4 05:33:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:35 np0005545273 awesome_napier[244979]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:33:35 np0005545273 awesome_napier[244979]: --> All data devices are unavailable
Dec  4 05:33:35 np0005545273 systemd[1]: libpod-e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721.scope: Deactivated successfully.
Dec  4 05:33:35 np0005545273 podman[244928]: 2025-12-04 10:33:35.34182776 +0000 UTC m=+0.808947887 container died e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:33:35 np0005545273 podman[244928]: 2025-12-04 10:33:35.386251895 +0000 UTC m=+0.853372012 container remove e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:33:35 np0005545273 systemd[1]: libpod-conmon-e531a509e4b35478c902c72dc331adc87fd8cf9931a5e5836d60e9514b7a5721.scope: Deactivated successfully.
Dec  4 05:33:35 np0005545273 systemd[1]: session-50.scope: Deactivated successfully.
Dec  4 05:33:35 np0005545273 systemd[1]: session-50.scope: Consumed 2min 18.462s CPU time.
Dec  4 05:33:35 np0005545273 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Dec  4 05:33:35 np0005545273 systemd-logind[798]: Removed session 50.
Dec  4 05:33:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-874e34553cf44dcff322c19be86afc5d7b342ebdb6032b7464f368c373c1c906-merged.mount: Deactivated successfully.
Dec  4 05:33:35 np0005545273 podman[245142]: 2025-12-04 10:33:35.846048808 +0000 UTC m=+0.045164385 container create 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:33:35 np0005545273 systemd[1]: Started libpod-conmon-80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08.scope.
Dec  4 05:33:35 np0005545273 podman[245142]: 2025-12-04 10:33:35.825404228 +0000 UTC m=+0.024519825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:33:35 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:35 np0005545273 podman[245142]: 2025-12-04 10:33:35.944800859 +0000 UTC m=+0.143916466 container init 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:33:35 np0005545273 podman[245142]: 2025-12-04 10:33:35.95269881 +0000 UTC m=+0.151814387 container start 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:33:35 np0005545273 podman[245142]: 2025-12-04 10:33:35.955902608 +0000 UTC m=+0.155018185 container attach 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:33:35 np0005545273 gracious_agnesi[245158]: 167 167
Dec  4 05:33:35 np0005545273 systemd[1]: libpod-80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08.scope: Deactivated successfully.
Dec  4 05:33:35 np0005545273 podman[245142]: 2025-12-04 10:33:35.96054268 +0000 UTC m=+0.159658277 container died 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:33:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f2878082f4ecc3790610b4c791de23920baadb311236623b8e12bc744a6acd77-merged.mount: Deactivated successfully.
Dec  4 05:33:36 np0005545273 podman[245142]: 2025-12-04 10:33:36.012559999 +0000 UTC m=+0.211675576 container remove 80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  4 05:33:36 np0005545273 systemd[1]: libpod-conmon-80d1ccab020ad0c3fa21edbd60e396dd4b9a393c9ecf5c9baaa6cf742376ef08.scope: Deactivated successfully.
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.094 244650 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.095 244650 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.095 244650 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.095 244650 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  4 05:33:36 np0005545273 podman[245183]: 2025-12-04 10:33:36.197721963 +0000 UTC m=+0.057331829 container create 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:33:36 np0005545273 systemd[1]: Started libpod-conmon-5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654.scope.
Dec  4 05:33:36 np0005545273 podman[245183]: 2025-12-04 10:33:36.16378046 +0000 UTC m=+0.023390316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:33:36 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:36 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:36 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:36 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:36 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:36 np0005545273 podman[245183]: 2025-12-04 10:33:36.30623069 +0000 UTC m=+0.165840546 container init 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.309 244650 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:33:36 np0005545273 podman[245183]: 2025-12-04 10:33:36.314898199 +0000 UTC m=+0.174508035 container start 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec  4 05:33:36 np0005545273 podman[245183]: 2025-12-04 10:33:36.318781833 +0000 UTC m=+0.178391699 container attach 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.327 244650 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.328 244650 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]: {
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:    "0": [
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:        {
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "devices": [
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "/dev/loop3"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            ],
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_name": "ceph_lv0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_size": "21470642176",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "name": "ceph_lv0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "tags": {
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cluster_name": "ceph",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.crush_device_class": "",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.encrypted": "0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.objectstore": "bluestore",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osd_id": "0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.type": "block",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.vdo": "0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.with_tpm": "0"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            },
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "type": "block",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "vg_name": "ceph_vg0"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:        }
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:    ],
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:    "1": [
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:        {
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "devices": [
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "/dev/loop4"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            ],
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_name": "ceph_lv1",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_size": "21470642176",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "name": "ceph_lv1",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "tags": {
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cluster_name": "ceph",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.crush_device_class": "",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.encrypted": "0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.objectstore": "bluestore",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osd_id": "1",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.type": "block",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.vdo": "0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.with_tpm": "0"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            },
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "type": "block",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "vg_name": "ceph_vg1"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:        }
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:    ],
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:    "2": [
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:        {
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "devices": [
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "/dev/loop5"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            ],
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_name": "ceph_lv2",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_size": "21470642176",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "name": "ceph_lv2",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "tags": {
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.cluster_name": "ceph",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.crush_device_class": "",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.encrypted": "0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.objectstore": "bluestore",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osd_id": "2",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.type": "block",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.vdo": "0",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:                "ceph.with_tpm": "0"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            },
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "type": "block",
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:            "vg_name": "ceph_vg2"
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:        }
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]:    ]
Dec  4 05:33:36 np0005545273 reverent_mclean[245200]: }
Dec  4 05:33:36 np0005545273 systemd[1]: libpod-5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654.scope: Deactivated successfully.
Dec  4 05:33:36 np0005545273 podman[245183]: 2025-12-04 10:33:36.666770859 +0000 UTC m=+0.526380705 container died 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:33:36 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c4b83c5ff2f74363c9af17e6ccbe568b4339044925157f97e086ad3dd0bc890f-merged.mount: Deactivated successfully.
Dec  4 05:33:36 np0005545273 podman[245183]: 2025-12-04 10:33:36.773677617 +0000 UTC m=+0.633287493 container remove 5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mclean, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.794 244650 INFO nova.virt.driver [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  4 05:33:36 np0005545273 systemd[1]: libpod-conmon-5782a03eb74cb318230725844888e8ac7755bc46863d89d521009d3a6506f654.scope: Deactivated successfully.
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.925 244650 INFO nova.compute.provider_config [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.942 244650 DEBUG oslo_concurrency.lockutils [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.942 244650 DEBUG oslo_concurrency.lockutils [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.942 244650 DEBUG oslo_concurrency.lockutils [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.943 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.944 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.945 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.946 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.947 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.948 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.949 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.950 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.950 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.950 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.951 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.952 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.953 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.954 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.955 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.956 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.957 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.958 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.959 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.960 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.961 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.962 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.963 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.964 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.965 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.966 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.967 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.968 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.969 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.970 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.971 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.972 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.973 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.974 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.975 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.976 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.977 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.978 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.979 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.980 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.981 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.982 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.983 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.984 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.985 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.986 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.987 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.988 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.989 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.990 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.991 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.992 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.993 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.994 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.995 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.996 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.997 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.998 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:36 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:36.999 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.000 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.001 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.002 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.003 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.004 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.005 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.006 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.007 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.008 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.009 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.010 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.011 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.012 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.013 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.014 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.015 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.016 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.017 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.018 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.019 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.020 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.021 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.022 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.023 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.023 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.023 244650 WARNING oslo_config.cfg [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  4 05:33:37 np0005545273 nova_compute[244644]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  4 05:33:37 np0005545273 nova_compute[244644]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  4 05:33:37 np0005545273 nova_compute[244644]: and ``live_migration_inbound_addr`` respectively.
Dec  4 05:33:37 np0005545273 nova_compute[244644]: ).  Its value may be silently ignored in the future.#033[00m
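The deprecation record above names its own replacement: instead of `live_migration_uri` (logged below as `qemu+tls://%s/system`), Nova expects the scheme and target address to be split across `live_migration_scheme` and `live_migration_inbound_addr`. A minimal sketch of the equivalent `[libvirt]` stanza, assuming a TLS scheme matching the URI in this log — the inbound address is a placeholder, not taken from this host:

```ini
# Sketch only: replaces live_migration_uri = qemu+tls://%s/system
[libvirt]
live_migration_scheme = tls
# placeholder — set to this compute node's migration-network address
live_migration_inbound_addr = <migration network IP or hostname>
```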
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.024 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.024 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.025 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.026 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.027 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_secret_uuid        = f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.028 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.029 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.030 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.031 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.032 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.033 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.034 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.035 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.036 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.037 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.038 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.039 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.040 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.041 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.042 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.043 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.044 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.045 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.046 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.047 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.048 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.049 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.050 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.051 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.052 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.053 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.054 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.055 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.056 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.057 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.058 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.059 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.060 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.061 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.062 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.063 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.063 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.064 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.065 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.066 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.067 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.068 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.069 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.070 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.071 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.072 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.073 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.074 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.075 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.076 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.077 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.078 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.079 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.080 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.081 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.082 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.083 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.084 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.085 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.086 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.087 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.088 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.089 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.090 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.091 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.092 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.093 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.094 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.095 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.096 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.097 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.098 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.099 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.100 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.101 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.102 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.103 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.104 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.105 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.106 244650 DEBUG oslo_service.service [None req-86d1602f-b3e5-4fec-a4c1-f137886f235f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.108 244650 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.126 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.127 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.127 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.127 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  4 05:33:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.145 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f31cd7af250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.148 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f31cd7af250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.149 244650 INFO nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.156 244650 INFO nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host capabilities <capabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <host>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <uuid>1f0bfa2d-c922-4848-973a-776654e5dc59</uuid>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <arch>x86_64</arch>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model>EPYC-Rome-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <vendor>AMD</vendor>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <microcode version='16777317'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <signature family='23' model='49' stepping='0'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='x2apic'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='tsc-deadline'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='osxsave'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='hypervisor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='tsc_adjust'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='spec-ctrl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='stibp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='arch-capabilities'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='cmp_legacy'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='topoext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='virt-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='lbrv'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='tsc-scale'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='vmcb-clean'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='pause-filter'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='pfthreshold'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='svme-addr-chk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='rdctl-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='skip-l1dfl-vmentry'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='mds-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature name='pschange-mc-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <pages unit='KiB' size='4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <pages unit='KiB' size='2048'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <pages unit='KiB' size='1048576'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <power_management>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <suspend_mem/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </power_management>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <iommu support='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <migration_features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <live/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <uri_transports>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <uri_transport>tcp</uri_transport>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <uri_transport>rdma</uri_transport>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </uri_transports>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </migration_features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <topology>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <cells num='1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <cell id='0'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          <memory unit='KiB'>7864320</memory>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          <pages unit='KiB' size='2048'>0</pages>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          <distances>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <sibling id='0' value='10'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          </distances>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          <cpus num='8'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:          </cpus>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        </cell>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </cells>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </topology>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <cache>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </cache>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <secmodel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model>selinux</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <doi>0</doi>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </secmodel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <secmodel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model>dac</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <doi>0</doi>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </secmodel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </host>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <guest>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <os_type>hvm</os_type>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <arch name='i686'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <wordsize>32</wordsize>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <domain type='qemu'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <domain type='kvm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </arch>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <pae/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <nonpae/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <acpi default='on' toggle='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <apic default='on' toggle='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <cpuselection/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <deviceboot/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <disksnapshot default='on' toggle='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <externalSnapshot/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </guest>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <guest>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <os_type>hvm</os_type>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <arch name='x86_64'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <wordsize>64</wordsize>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <domain type='qemu'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <domain type='kvm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </arch>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <acpi default='on' toggle='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <apic default='on' toggle='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <cpuselection/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <deviceboot/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <disksnapshot default='on' toggle='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <externalSnapshot/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </guest>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 
Dec  4 05:33:37 np0005545273 nova_compute[244644]: </capabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: #033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.160 244650 WARNING nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.161 244650 DEBUG nova.virt.libvirt.volume.mount [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.167 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.192 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  4 05:33:37 np0005545273 nova_compute[244644]: <domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <path>/usr/libexec/qemu-kvm</path>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <domain>kvm</domain>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <arch>i686</arch>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <vcpu max='4096'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <iothreads supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <os supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='firmware'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <loader supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>rom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pflash</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='readonly'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>yes</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='secure'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </loader>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </os>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-passthrough' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='hostPassthroughMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='maximum' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='maximumMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-model' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <vendor>AMD</vendor>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='x2apic'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-deadline'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='hypervisor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc_adjust'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='spec-ctrl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='stibp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='cmp_legacy'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='overflow-recov'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='succor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='amd-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='virt-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lbrv'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-scale'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='vmcb-clean'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='flushbyasid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pause-filter'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pfthreshold'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='svme-addr-chk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='disable' name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='custom' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Dhyana-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-128'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-256'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-512'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v6'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v7'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <memoryBacking supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='sourceType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>anonymous</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>memfd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </memoryBacking>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <disk supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='diskDevice'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>disk</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cdrom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>floppy</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>lun</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>fdc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>sata</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </disk>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <graphics supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vnc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egl-headless</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </graphics>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <video supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='modelType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vga</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cirrus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>none</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>bochs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ramfb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </video>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hostdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='mode'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>subsystem</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='startupPolicy'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>mandatory</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>requisite</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>optional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='subsysType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pci</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='capsType'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='pciBackend'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hostdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <rng supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>random</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </rng>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <filesystem supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='driverType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>path</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>handle</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtiofs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </filesystem>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <tpm supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-tis</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-crb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emulator</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>external</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendVersion'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>2.0</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </tpm>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <redirdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </redirdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <channel supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </channel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <crypto supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </crypto>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <interface supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>passt</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </interface>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <panic supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>isa</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>hyperv</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </panic>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <console supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>null</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dev</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pipe</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stdio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>udp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tcp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu-vdagent</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </console>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <gic supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <vmcoreinfo supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <genid supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backingStoreInput supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backup supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <async-teardown supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <ps2 supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sev supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sgx supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hyperv supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='features'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>relaxed</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vapic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>spinlocks</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vpindex</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>runtime</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>synic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stimer</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reset</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vendor_id</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>frequencies</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reenlightenment</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tlbflush</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ipi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>avic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emsr_bitmap</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>xmm_input</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <spinlocks>4095</spinlocks>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <stimer_direct>on</stimer_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_direct>on</tlbflush_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_extended>on</tlbflush_extended>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hyperv>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <launchSecurity supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='sectype'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tdx</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </launchSecurity>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: </domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.199 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  4 05:33:37 np0005545273 nova_compute[244644]: <domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <path>/usr/libexec/qemu-kvm</path>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <domain>kvm</domain>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <arch>i686</arch>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <vcpu max='240'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <iothreads supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <os supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='firmware'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <loader supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>rom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pflash</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='readonly'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>yes</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='secure'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </loader>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </os>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-passthrough' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='hostPassthroughMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='maximum' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='maximumMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-model' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <vendor>AMD</vendor>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='x2apic'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-deadline'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='hypervisor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc_adjust'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='spec-ctrl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='stibp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='cmp_legacy'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='overflow-recov'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='succor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='amd-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='virt-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lbrv'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-scale'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='vmcb-clean'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='flushbyasid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pause-filter'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pfthreshold'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='svme-addr-chk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='disable' name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='custom' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Dhyana-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-128'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-256'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-512'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v6'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v7'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <memoryBacking supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='sourceType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>anonymous</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>memfd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </memoryBacking>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <disk supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='diskDevice'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>disk</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cdrom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>floppy</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>lun</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ide</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>fdc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>sata</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </disk>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <graphics supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vnc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egl-headless</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </graphics>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <video supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='modelType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vga</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cirrus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>none</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>bochs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ramfb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </video>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hostdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='mode'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>subsystem</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='startupPolicy'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>mandatory</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>requisite</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>optional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='subsysType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pci</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='capsType'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='pciBackend'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hostdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <rng supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>random</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </rng>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <filesystem supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='driverType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>path</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>handle</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtiofs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </filesystem>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <tpm supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-tis</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-crb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emulator</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>external</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendVersion'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>2.0</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </tpm>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <redirdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </redirdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <channel supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </channel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <crypto supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </crypto>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <interface supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>passt</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </interface>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <panic supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>isa</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>hyperv</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </panic>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <console supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>null</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dev</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pipe</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stdio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>udp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tcp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu-vdagent</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </console>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <gic supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <vmcoreinfo supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <genid supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backingStoreInput supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backup supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <async-teardown supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <ps2 supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sev supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sgx supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hyperv supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='features'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>relaxed</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vapic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>spinlocks</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vpindex</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>runtime</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>synic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stimer</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reset</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vendor_id</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>frequencies</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reenlightenment</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tlbflush</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ipi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>avic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emsr_bitmap</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>xmm_input</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <spinlocks>4095</spinlocks>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <stimer_direct>on</stimer_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_direct>on</tlbflush_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_extended>on</tlbflush_extended>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hyperv>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <launchSecurity supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='sectype'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tdx</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </launchSecurity>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: </domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.240 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.244 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  4 05:33:37 np0005545273 nova_compute[244644]: <domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <path>/usr/libexec/qemu-kvm</path>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <domain>kvm</domain>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <arch>x86_64</arch>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <vcpu max='4096'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <iothreads supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <os supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='firmware'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>efi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <loader supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>rom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pflash</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='readonly'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>yes</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='secure'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>yes</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </loader>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </os>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-passthrough' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='hostPassthroughMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='maximum' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='maximumMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-model' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <vendor>AMD</vendor>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='x2apic'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-deadline'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='hypervisor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc_adjust'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='spec-ctrl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='stibp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='cmp_legacy'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='overflow-recov'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='succor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='amd-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='virt-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lbrv'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-scale'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='vmcb-clean'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='flushbyasid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pause-filter'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pfthreshold'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='svme-addr-chk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='disable' name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='custom' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Dhyana-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-128'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-256'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-512'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v6'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v7'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 podman[245308]: 2025-12-04 10:33:37.269543413 +0000 UTC m=+0.022452655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <memoryBacking supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='sourceType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>anonymous</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>memfd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </memoryBacking>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <disk supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='diskDevice'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>disk</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cdrom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>floppy</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>lun</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>fdc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>sata</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </disk>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <graphics supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vnc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egl-headless</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </graphics>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <video supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='modelType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vga</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cirrus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>none</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>bochs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ramfb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </video>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hostdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='mode'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>subsystem</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='startupPolicy'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>mandatory</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>requisite</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>optional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='subsysType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pci</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='capsType'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='pciBackend'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hostdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <rng supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>random</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </rng>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <filesystem supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='driverType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>path</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>handle</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtiofs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </filesystem>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <tpm supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-tis</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-crb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emulator</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>external</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendVersion'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>2.0</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </tpm>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <redirdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </redirdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <channel supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </channel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <crypto supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </crypto>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <interface supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>passt</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </interface>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <panic supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>isa</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>hyperv</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </panic>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <console supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>null</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dev</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pipe</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stdio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>udp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tcp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu-vdagent</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </console>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <gic supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <vmcoreinfo supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <genid supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backingStoreInput supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backup supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <async-teardown supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <ps2 supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sev supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sgx supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hyperv supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='features'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>relaxed</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vapic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>spinlocks</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vpindex</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>runtime</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>synic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stimer</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reset</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vendor_id</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>frequencies</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reenlightenment</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tlbflush</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ipi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>avic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emsr_bitmap</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>xmm_input</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <spinlocks>4095</spinlocks>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <stimer_direct>on</stimer_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_direct>on</tlbflush_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_extended>on</tlbflush_extended>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hyperv>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <launchSecurity supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='sectype'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tdx</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </launchSecurity>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: </domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.313 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  4 05:33:37 np0005545273 nova_compute[244644]: <domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <path>/usr/libexec/qemu-kvm</path>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <domain>kvm</domain>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <arch>x86_64</arch>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <vcpu max='240'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <iothreads supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <os supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='firmware'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <loader supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>rom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pflash</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='readonly'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>yes</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='secure'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>no</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </loader>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </os>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-passthrough' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='hostPassthroughMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='maximum' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='maximumMigratable'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>on</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>off</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='host-model' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <vendor>AMD</vendor>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='x2apic'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-deadline'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='hypervisor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc_adjust'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='spec-ctrl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='stibp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='cmp_legacy'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='overflow-recov'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='succor'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='amd-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='virt-ssbd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lbrv'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='tsc-scale'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='vmcb-clean'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='flushbyasid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pause-filter'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='pfthreshold'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='svme-addr-chk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <feature policy='disable' name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <mode name='custom' supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Broadwell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cascadelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Cooperlake-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Denverton-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Dhyana-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Genoa-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='auto-ibrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Milan-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amd-psfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='no-nested-data-bp'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='null-sel-clr-base'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='stibp-always-on'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-Rome-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='EPYC-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='GraniteRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-128'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-256'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx10-512'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='prefetchiti'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 podman[245308]: 2025-12-04 10:33:37.42224982 +0000 UTC m=+0.175159042 container create 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Haswell-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-noTSX'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v6'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Icelake-Server-v7'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='IvyBridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='KnightsMill-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4fmaps'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-4vnniw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512er'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512pf'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G4-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Opteron_G5-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fma4'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tbm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xop'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SapphireRapids-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='amx-tile'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-bf16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-fp16'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512-vpopcntdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bitalg'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vbmi2'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrc'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fzrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='la57'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='taa-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='tsx-ldtrk'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xfd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='SierraForest-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ifma'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-ne-convert'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx-vnni-int8'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='bus-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cmpccxadd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fbsdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='fsrs'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ibrs-all'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mcdt-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pbrsb-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='psdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='sbdr-ssdp-no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='serialize'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vaes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='vpclmulqdq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Client-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='hle'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='rtm'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Skylake-Server-v5'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512bw'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512cd'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512dq'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512f'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='avx512vl'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='invpcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pcid'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='pku'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='mpx'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v2'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v3'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='core-capability'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='split-lock-detect'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='Snowridge-v4'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='cldemote'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='erms'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='gfni'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdir64b'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='movdiri'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='xsaves'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='athlon-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='core2duo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='coreduo-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='n270-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='ss'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <blockers model='phenom-v1'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnow'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <feature name='3dnowext'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </blockers>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </mode>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </cpu>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <memoryBacking supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <enum name='sourceType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>anonymous</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <value>memfd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </memoryBacking>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <disk supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='diskDevice'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>disk</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cdrom</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>floppy</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>lun</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ide</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>fdc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>sata</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </disk>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <graphics supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vnc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egl-headless</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </graphics>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <video supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='modelType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vga</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>cirrus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>none</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>bochs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ramfb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </video>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hostdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='mode'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>subsystem</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='startupPolicy'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>mandatory</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>requisite</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>optional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='subsysType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pci</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>scsi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='capsType'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='pciBackend'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hostdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <rng supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtio-non-transitional</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>random</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>egd</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </rng>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <filesystem supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='driverType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>path</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>handle</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>virtiofs</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </filesystem>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <tpm supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-tis</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tpm-crb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emulator</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>external</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendVersion'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>2.0</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </tpm>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <redirdev supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='bus'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>usb</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </redirdev>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <channel supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </channel>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <crypto supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendModel'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>builtin</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </crypto>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <interface supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='backendType'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>default</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>passt</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </interface>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <panic supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='model'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>isa</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>hyperv</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </panic>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <console supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='type'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>null</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vc</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pty</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dev</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>file</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>pipe</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stdio</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>udp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tcp</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>unix</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>qemu-vdagent</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>dbus</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </console>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </devices>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  <features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <gic supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <vmcoreinfo supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <genid supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backingStoreInput supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <backup supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <async-teardown supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <ps2 supported='yes'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sev supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <sgx supported='no'/>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <hyperv supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='features'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>relaxed</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vapic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>spinlocks</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vpindex</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>runtime</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>synic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>stimer</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reset</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>vendor_id</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>frequencies</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>reenlightenment</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tlbflush</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>ipi</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>avic</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>emsr_bitmap</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>xmm_input</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <spinlocks>4095</spinlocks>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <stimer_direct>on</stimer_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_direct>on</tlbflush_direct>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <tlbflush_extended>on</tlbflush_extended>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </defaults>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </hyperv>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    <launchSecurity supported='yes'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      <enum name='sectype'>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:        <value>tdx</value>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:      </enum>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:    </launchSecurity>
Dec  4 05:33:37 np0005545273 nova_compute[244644]:  </features>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: </domainCapabilities>
Dec  4 05:33:37 np0005545273 nova_compute[244644]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.387 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.388 244650 INFO nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Secure Boot support detected#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.390 244650 INFO nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.398 244650 DEBUG nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.447 244650 INFO nova.virt.node [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Determined node identity 39e18386-dcd4-4a7a-8441-091a9ba1f70f from /var/lib/nova/compute_id#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.466 244650 WARNING nova.compute.manager [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Compute nodes ['39e18386-dcd4-4a7a-8441-091a9ba1f70f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  4 05:33:37 np0005545273 systemd[1]: Started libpod-conmon-4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a.scope.
Dec  4 05:33:37 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.528 244650 INFO nova.compute.manager [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  4 05:33:37 np0005545273 podman[245308]: 2025-12-04 10:33:37.549924781 +0000 UTC m=+0.302834033 container init 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:33:37 np0005545273 podman[245308]: 2025-12-04 10:33:37.558590922 +0000 UTC m=+0.311500144 container start 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:33:37 np0005545273 laughing_bhabha[245325]: 167 167
Dec  4 05:33:37 np0005545273 systemd[1]: libpod-4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a.scope: Deactivated successfully.
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.570 244650 WARNING nova.compute.manager [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.571 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:33:37 np0005545273 nova_compute[244644]: 2025-12-04 10:33:37.572 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:33:37 np0005545273 podman[245308]: 2025-12-04 10:33:37.662807145 +0000 UTC m=+0.415716467 container attach 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:33:37 np0005545273 podman[245308]: 2025-12-04 10:33:37.664514066 +0000 UTC m=+0.417423328 container died 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:33:37 np0005545273 systemd[1]: var-lib-containers-storage-overlay-74540bad621965b17a4be5e1748ad040bbf67021841ed96ff2b79c855e756489-merged.mount: Deactivated successfully.
Dec  4 05:33:37 np0005545273 podman[245308]: 2025-12-04 10:33:37.832431291 +0000 UTC m=+0.585340513 container remove 4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bhabha, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:33:37 np0005545273 systemd[1]: libpod-conmon-4f5e2518863149d41ba8f2addfb33bfc481094a9e8c0e381b747a48a1710c80a.scope: Deactivated successfully.
Dec  4 05:33:38 np0005545273 podman[245368]: 2025-12-04 10:33:38.004652891 +0000 UTC m=+0.046834094 container create 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:33:38 np0005545273 systemd[1]: Started libpod-conmon-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope.
Dec  4 05:33:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:33:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:33:38 np0005545273 podman[245368]: 2025-12-04 10:33:38.07853089 +0000 UTC m=+0.120712113 container init 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:33:38 np0005545273 podman[245368]: 2025-12-04 10:33:37.984087564 +0000 UTC m=+0.026268777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:33:38 np0005545273 podman[245368]: 2025-12-04 10:33:38.086448802 +0000 UTC m=+0.128630015 container start 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:33:38 np0005545273 podman[245368]: 2025-12-04 10:33:38.089600488 +0000 UTC m=+0.131781701 container attach 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277150800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.112 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:33:38 np0005545273 systemd[1]: Starting libvirt nodedev daemon...
Dec  4 05:33:38 np0005545273 systemd[1]: Started libvirt nodedev daemon.
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.429 244650 WARNING nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.431 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.431 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.432 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.453 244650 WARNING nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] No compute node record for compute-0.ctlplane.example.com:39e18386-dcd4-4a7a-8441-091a9ba1f70f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 39e18386-dcd4-4a7a-8441-091a9ba1f70f could not be found.#033[00m
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.475 244650 INFO nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 39e18386-dcd4-4a7a-8441-091a9ba1f70f#033[00m
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.552 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:33:38 np0005545273 nova_compute[244644]: 2025-12-04 10:33:38.553 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.623984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418624031, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1330, "num_deletes": 505, "total_data_size": 1639607, "memory_usage": 1669840, "flush_reason": "Manual Compaction"}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418642472, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1624331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13538, "largest_seqno": 14867, "table_properties": {"data_size": 1618432, "index_size": 2783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14756, "raw_average_key_size": 18, "raw_value_size": 1604682, "raw_average_value_size": 1959, "num_data_blocks": 127, "num_entries": 819, "num_filter_entries": 819, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844309, "oldest_key_time": 1764844309, "file_creation_time": 1764844418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18543 microseconds, and 4857 cpu microseconds.
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.642527) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1624331 bytes OK
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.642559) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.644502) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.644524) EVENT_LOG_v1 {"time_micros": 1764844418644518, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.644547) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1632576, prev total WAL file size 1632576, number of live WAL files 2.
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.645288) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1586KB)], [32(7444KB)]
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418645330, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9247553, "oldest_snapshot_seqno": -1}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3837 keys, 7307396 bytes, temperature: kUnknown
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418700120, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7307396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7280129, "index_size": 16597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 94119, "raw_average_key_size": 24, "raw_value_size": 7208980, "raw_average_value_size": 1878, "num_data_blocks": 703, "num_entries": 3837, "num_filter_entries": 3837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.700427) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7307396 bytes
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.702372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.3 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4860, records dropped: 1023 output_compression: NoCompression
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.702393) EVENT_LOG_v1 {"time_micros": 1764844418702383, "job": 14, "event": "compaction_finished", "compaction_time_micros": 54923, "compaction_time_cpu_micros": 18861, "output_level": 6, "num_output_files": 1, "total_output_size": 7307396, "num_input_records": 4860, "num_output_records": 3837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418702960, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844418704536, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.645193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:33:38 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:33:38.704659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:33:38 np0005545273 lvm[245488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:33:38 np0005545273 lvm[245489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:33:38 np0005545273 lvm[245489]: VG ceph_vg1 finished
Dec  4 05:33:38 np0005545273 lvm[245488]: VG ceph_vg0 finished
Dec  4 05:33:38 np0005545273 lvm[245491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:33:38 np0005545273 lvm[245491]: VG ceph_vg2 finished
Dec  4 05:33:38 np0005545273 admiring_nightingale[245385]: {}
Dec  4 05:33:39 np0005545273 systemd[1]: libpod-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope: Deactivated successfully.
Dec  4 05:33:39 np0005545273 systemd[1]: libpod-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope: Consumed 1.412s CPU time.
Dec  4 05:33:39 np0005545273 podman[245368]: 2025-12-04 10:33:39.009324226 +0000 UTC m=+1.051505439 container died 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:33:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ff1733d732741cb462462f9fe1be93018a6354f7df68e32ce76d1ee305deec68-merged.mount: Deactivated successfully.
Dec  4 05:33:39 np0005545273 podman[245368]: 2025-12-04 10:33:39.060497705 +0000 UTC m=+1.102678918 container remove 2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_nightingale, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:33:39 np0005545273 systemd[1]: libpod-conmon-2113069572ae9440431c9bd755a0ac0eeba8f8540366faac52c391d9ead13cd6.scope: Deactivated successfully.
Dec  4 05:33:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:33:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:33:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:33:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:33:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:39 np0005545273 nova_compute[244644]: 2025-12-04 10:33:39.437 244650 INFO nova.scheduler.client.report [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] [req-a5c4e75f-5bd8-407c-9f38-fa4768b53063] Created resource provider record via placement API for resource provider with UUID 39e18386-dcd4-4a7a-8441-091a9ba1f70f and name compute-0.ctlplane.example.com.#033[00m
Dec  4 05:33:39 np0005545273 nova_compute[244644]: 2025-12-04 10:33:39.861 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:33:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:33:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:33:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:33:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529618897' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.404 244650 DEBUG oslo_concurrency.processutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.412 244650 DEBUG nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  4 05:33:40 np0005545273 nova_compute[244644]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.413 244650 INFO nova.virt.libvirt.host [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.414 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.414 244650 DEBUG nova.virt.libvirt.driver [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.485 244650 DEBUG nova.scheduler.client.report [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updated inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.486 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.486 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.608 244650 DEBUG nova.compute.provider_tree [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Updating resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.637 244650 DEBUG nova.compute.resource_tracker [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.637 244650 DEBUG oslo_concurrency.lockutils [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.638 244650 DEBUG nova.service [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.742 244650 DEBUG nova.service [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  4 05:33:40 np0005545273 nova_compute[244644]: 2025-12-04 10:33:40.742 244650 DEBUG nova.servicegroup.drivers.db [None req-f3aec9cf-1b2b-418e-8bb0-e289dedacd85 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  4 05:33:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:48 np0005545273 podman[245553]: 2025-12-04 10:33:48.969907828 +0000 UTC m=+0.072485076 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:33:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:33:54.899 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:33:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:33:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:33:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:33:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:33:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:33:57 np0005545273 podman[245577]: 2025-12-04 10:33:57.907159315 +0000 UTC m=+0.138211858 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:33:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:33:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:33:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:33:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:33:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:33:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:33:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:33:58 np0005545273 podman[245603]: 2025-12-04 10:33:58.939674473 +0000 UTC m=+0.043790781 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  4 05:33:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234979890' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/234979890' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673190191' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:34:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673190191' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:34:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:34:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2007810801' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:34:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:34:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2007810801' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:34:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:34:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3330 writes, 14K keys, 3330 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3330 writes, 3330 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1282 writes, 5825 keys, 1282 commit groups, 1.0 writes per commit group, ingest: 8.61 MB, 0.01 MB/s#012Interval WAL: 1282 writes, 1282 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     99.0      0.16              0.04         7    0.023       0      0       0.0       0.0#012  L6      1/0    6.97 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    127.1    104.6      0.40              0.11         6    0.066     24K   3207       0.0       0.0#012 Sum      1/0    6.97 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     90.5    103.0      0.56              0.15        13    0.043     24K   3207       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    120.0    121.4      0.29              0.09         8    0.036     17K   2470       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    127.1    104.6      0.40              0.11         6    0.066     24K   3207       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    101.1      0.16              0.04         6    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.6 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 308.00 MB usage: 1.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000123 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(107,1.72 MB,0.557154%) FilterBlock(14,75.86 KB,0.0240524%) IndexBlock(14,149.05 KB,0.0472577%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  4 05:34:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:19 np0005545273 podman[245622]: 2025-12-04 10:34:19.972190979 +0000 UTC m=+0.072514097 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  4 05:34:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:34:26
Dec  4 05:34:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:34:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:34:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Dec  4 05:34:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:34:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:34:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:34:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:34:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:34:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:34:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:34:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:34:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:28 np0005545273 podman[245642]: 2025-12-04 10:34:28.973469603 +0000 UTC m=+0.073839129 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  4 05:34:29 np0005545273 podman[245669]: 2025-12-04 10:34:29.061007462 +0000 UTC m=+0.059177613 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  4 05:34:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:33 np0005545273 nova_compute[244644]: 2025-12-04 10:34:33.744 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:33 np0005545273 nova_compute[244644]: 2025-12-04 10:34:33.897 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.340 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.341 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.363 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.363 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.364 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.365 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.365 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.423 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.424 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.424 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.424 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.425 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:34:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:34:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945567612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:34:36 np0005545273 nova_compute[244644]: 2025-12-04 10:34:36.979 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.134 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.136 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.136 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.136 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:34:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.230 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.230 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.254 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  4 05:34:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:34:37 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927827360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.755 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.762 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.779 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.822 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  4 05:34:37 np0005545273 nova_compute[244644]: 2025-12-04 10:34:37.823 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  4 05:34:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:34:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:34:40 np0005545273 podman[245876]: 2025-12-04 10:34:40.322584476 +0000 UTC m=+0.088365719 container create e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:34:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:34:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:34:40 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:34:40 np0005545273 podman[245876]: 2025-12-04 10:34:40.257065636 +0000 UTC m=+0.022846899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:34:40 np0005545273 systemd[1]: Started libpod-conmon-e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c.scope.
Dec  4 05:34:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:34:40 np0005545273 podman[245876]: 2025-12-04 10:34:40.44282606 +0000 UTC m=+0.208607303 container init e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:34:40 np0005545273 podman[245876]: 2025-12-04 10:34:40.450644141 +0000 UTC m=+0.216425384 container start e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:34:40 np0005545273 podman[245876]: 2025-12-04 10:34:40.453961922 +0000 UTC m=+0.219743185 container attach e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:34:40 np0005545273 friendly_boyd[245892]: 167 167
Dec  4 05:34:40 np0005545273 systemd[1]: libpod-e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c.scope: Deactivated successfully.
Dec  4 05:34:40 np0005545273 podman[245876]: 2025-12-04 10:34:40.456313739 +0000 UTC m=+0.222094982 container died e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:34:40 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c4676ef9522015c91dc60851711e8f736a71ce5ce6c9e3d206caf238df124b4d-merged.mount: Deactivated successfully.
Dec  4 05:34:40 np0005545273 podman[245876]: 2025-12-04 10:34:40.642516945 +0000 UTC m=+0.408298188 container remove e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:34:40 np0005545273 systemd[1]: libpod-conmon-e5a2e572cb2635d243fb523f3997ac0a791d621879f6d904156a2514e6baf86c.scope: Deactivated successfully.
Dec  4 05:34:40 np0005545273 podman[245914]: 2025-12-04 10:34:40.819600918 +0000 UTC m=+0.062838775 container create 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:34:40 np0005545273 systemd[1]: Started libpod-conmon-0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae.scope.
Dec  4 05:34:40 np0005545273 podman[245914]: 2025-12-04 10:34:40.777728835 +0000 UTC m=+0.020966722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:34:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:34:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:40 np0005545273 podman[245914]: 2025-12-04 10:34:40.893354467 +0000 UTC m=+0.136592354 container init 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:34:40 np0005545273 podman[245914]: 2025-12-04 10:34:40.903335911 +0000 UTC m=+0.146573788 container start 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:34:40 np0005545273 podman[245914]: 2025-12-04 10:34:40.919209319 +0000 UTC m=+0.162447206 container attach 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:34:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:41 np0005545273 clever_villani[245931]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:34:41 np0005545273 clever_villani[245931]: --> All data devices are unavailable
Dec  4 05:34:41 np0005545273 systemd[1]: libpod-0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae.scope: Deactivated successfully.
Dec  4 05:34:41 np0005545273 podman[245914]: 2025-12-04 10:34:41.48622721 +0000 UTC m=+0.729465087 container died 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:34:41 np0005545273 systemd[1]: var-lib-containers-storage-overlay-eaf109b8bd05f7f8e41db212f5d8dee9c152d21c8aeb79d1d93143d20783085c-merged.mount: Deactivated successfully.
Dec  4 05:34:41 np0005545273 podman[245914]: 2025-12-04 10:34:41.571387539 +0000 UTC m=+0.814625396 container remove 0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:34:41 np0005545273 systemd[1]: libpod-conmon-0c3c7afb8e9497d13c46d7cba85dd12710c78397e6610144462297756b6c1cae.scope: Deactivated successfully.
Dec  4 05:34:42 np0005545273 podman[246028]: 2025-12-04 10:34:42.012125946 +0000 UTC m=+0.037386963 container create 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:34:42 np0005545273 podman[246028]: 2025-12-04 10:34:41.996397722 +0000 UTC m=+0.021658769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:34:42 np0005545273 systemd[1]: Started libpod-conmon-92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563.scope.
Dec  4 05:34:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:34:42 np0005545273 podman[246028]: 2025-12-04 10:34:42.155594518 +0000 UTC m=+0.180855555 container init 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 05:34:42 np0005545273 podman[246028]: 2025-12-04 10:34:42.163211414 +0000 UTC m=+0.188472431 container start 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:34:42 np0005545273 podman[246028]: 2025-12-04 10:34:42.166640188 +0000 UTC m=+0.191901315 container attach 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:34:42 np0005545273 heuristic_keller[246044]: 167 167
Dec  4 05:34:42 np0005545273 systemd[1]: libpod-92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563.scope: Deactivated successfully.
Dec  4 05:34:42 np0005545273 podman[246028]: 2025-12-04 10:34:42.170062781 +0000 UTC m=+0.195323808 container died 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:34:42 np0005545273 systemd[1]: var-lib-containers-storage-overlay-81c1430130fb0d1731963a1059f1fe03fae10a323c0ede3d184c596742943de6-merged.mount: Deactivated successfully.
Dec  4 05:34:42 np0005545273 podman[246028]: 2025-12-04 10:34:42.315481371 +0000 UTC m=+0.340742388 container remove 92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:34:42 np0005545273 systemd[1]: libpod-conmon-92c8e03f14c0c63de04d04c8a3a341c9b90e55690b51b690c962820e28cb8563.scope: Deactivated successfully.
Dec  4 05:34:42 np0005545273 podman[246067]: 2025-12-04 10:34:42.542897902 +0000 UTC m=+0.111307798 container create 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:34:42 np0005545273 podman[246067]: 2025-12-04 10:34:42.455576511 +0000 UTC m=+0.023986457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:34:42 np0005545273 systemd[1]: Started libpod-conmon-520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542.scope.
Dec  4 05:34:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:34:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:42 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:42 np0005545273 podman[246067]: 2025-12-04 10:34:42.645388844 +0000 UTC m=+0.213798790 container init 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:34:42 np0005545273 podman[246067]: 2025-12-04 10:34:42.652658551 +0000 UTC m=+0.221068447 container start 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:34:42 np0005545273 podman[246067]: 2025-12-04 10:34:42.679314972 +0000 UTC m=+0.247724888 container attach 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]: {
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:    "0": [
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:        {
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "devices": [
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "/dev/loop3"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            ],
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_name": "ceph_lv0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_size": "21470642176",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "name": "ceph_lv0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "tags": {
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cluster_name": "ceph",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.crush_device_class": "",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.encrypted": "0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.objectstore": "bluestore",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osd_id": "0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.type": "block",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.vdo": "0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.with_tpm": "0"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            },
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "type": "block",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "vg_name": "ceph_vg0"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:        }
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:    ],
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:    "1": [
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:        {
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "devices": [
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "/dev/loop4"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            ],
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_name": "ceph_lv1",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_size": "21470642176",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "name": "ceph_lv1",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "tags": {
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cluster_name": "ceph",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.crush_device_class": "",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.encrypted": "0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.objectstore": "bluestore",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osd_id": "1",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.type": "block",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.vdo": "0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.with_tpm": "0"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            },
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "type": "block",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "vg_name": "ceph_vg1"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:        }
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:    ],
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:    "2": [
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:        {
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "devices": [
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "/dev/loop5"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            ],
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_name": "ceph_lv2",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_size": "21470642176",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "name": "ceph_lv2",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "tags": {
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.cluster_name": "ceph",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.crush_device_class": "",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.encrypted": "0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.objectstore": "bluestore",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osd_id": "2",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.type": "block",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.vdo": "0",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:                "ceph.with_tpm": "0"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            },
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "type": "block",
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:            "vg_name": "ceph_vg2"
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:        }
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]:    ]
Dec  4 05:34:42 np0005545273 magical_meninsky[246084]: }
Dec  4 05:34:42 np0005545273 systemd[1]: libpod-520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542.scope: Deactivated successfully.
Dec  4 05:34:42 np0005545273 podman[246093]: 2025-12-04 10:34:42.983834074 +0000 UTC m=+0.023236507 container died 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:34:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2c919580049fec82fc361fa2e3b5aa25aa47d7145af2ab225b4d4c5a3e6c02d4-merged.mount: Deactivated successfully.
Dec  4 05:34:43 np0005545273 podman[246093]: 2025-12-04 10:34:43.171768213 +0000 UTC m=+0.211170636 container remove 520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:34:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:43 np0005545273 systemd[1]: libpod-conmon-520a3061645cbd7fc52c22b7bc00095519889de9dd004a40256655d4ad2f7542.scope: Deactivated successfully.
Dec  4 05:34:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:43 np0005545273 podman[246171]: 2025-12-04 10:34:43.732309375 +0000 UTC m=+0.055936367 container create 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:34:43 np0005545273 systemd[1]: Started libpod-conmon-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope.
Dec  4 05:34:43 np0005545273 podman[246171]: 2025-12-04 10:34:43.706278619 +0000 UTC m=+0.029905651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:34:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:34:43 np0005545273 podman[246171]: 2025-12-04 10:34:43.817881494 +0000 UTC m=+0.141508486 container init 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:34:43 np0005545273 podman[246171]: 2025-12-04 10:34:43.824061894 +0000 UTC m=+0.147688876 container start 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:34:43 np0005545273 podman[246171]: 2025-12-04 10:34:43.827811096 +0000 UTC m=+0.151438098 container attach 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:34:43 np0005545273 systemd[1]: libpod-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope: Deactivated successfully.
Dec  4 05:34:43 np0005545273 flamboyant_cohen[246187]: 167 167
Dec  4 05:34:43 np0005545273 conmon[246187]: conmon 78ac4482360f3a6aa692 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope/container/memory.events
Dec  4 05:34:43 np0005545273 podman[246171]: 2025-12-04 10:34:43.83041806 +0000 UTC m=+0.154045042 container died 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:34:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8f4f3d71c672d555358757da4660ad9f389078d9ab4d5f7f0a9a8355b9e940b6-merged.mount: Deactivated successfully.
Dec  4 05:34:43 np0005545273 podman[246171]: 2025-12-04 10:34:43.865874975 +0000 UTC m=+0.189501947 container remove 78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:34:43 np0005545273 systemd[1]: libpod-conmon-78ac4482360f3a6aa69270e66df0473e4264e2c5531eb25eff8b39ca60f8cd7f.scope: Deactivated successfully.
Dec  4 05:34:44 np0005545273 podman[246211]: 2025-12-04 10:34:44.024169939 +0000 UTC m=+0.042592721 container create 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:34:44 np0005545273 systemd[1]: Started libpod-conmon-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope.
Dec  4 05:34:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:34:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:44 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:34:44 np0005545273 podman[246211]: 2025-12-04 10:34:44.005978835 +0000 UTC m=+0.024401647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:34:44 np0005545273 podman[246211]: 2025-12-04 10:34:44.101744633 +0000 UTC m=+0.120167435 container init 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:34:44 np0005545273 podman[246211]: 2025-12-04 10:34:44.110352603 +0000 UTC m=+0.128775385 container start 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:34:44 np0005545273 podman[246211]: 2025-12-04 10:34:44.114628837 +0000 UTC m=+0.133051649 container attach 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:34:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec  4 05:34:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092581525' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec  4 05:34:44 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  4 05:34:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  4 05:34:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  4 05:34:44 np0005545273 lvm[246306]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:34:44 np0005545273 lvm[246307]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:34:44 np0005545273 lvm[246307]: VG ceph_vg1 finished
Dec  4 05:34:44 np0005545273 lvm[246306]: VG ceph_vg0 finished
Dec  4 05:34:44 np0005545273 lvm[246309]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:34:44 np0005545273 lvm[246309]: VG ceph_vg2 finished
Dec  4 05:34:44 np0005545273 eager_sutherland[246228]: {}
Dec  4 05:34:45 np0005545273 systemd[1]: libpod-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope: Deactivated successfully.
Dec  4 05:34:45 np0005545273 systemd[1]: libpod-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope: Consumed 1.447s CPU time.
Dec  4 05:34:45 np0005545273 podman[246211]: 2025-12-04 10:34:45.024936227 +0000 UTC m=+1.043359029 container died 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:34:45 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b5bc5317535d67956a74996fb6faecdf119833c4fed1eac78cee0a1de38aafa2-merged.mount: Deactivated successfully.
Dec  4 05:34:45 np0005545273 podman[246211]: 2025-12-04 10:34:45.074303611 +0000 UTC m=+1.092726393 container remove 8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_sutherland, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:34:45 np0005545273 systemd[1]: libpod-conmon-8dfccf13d6dfcf272f8f26eb7b47666977dab12b00fc754cf2e50c1f004b822b.scope: Deactivated successfully.
Dec  4 05:34:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:34:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:34:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:34:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:34:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:34:45 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:34:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:50 np0005545273 ceph-osd[88205]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.000054s
Dec  4 05:34:50 np0005545273 ceph-osd[86021]: bluestore.MempoolThread fragmentation_score=0.000116 took=0.000017s
Dec  4 05:34:50 np0005545273 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000037s
Dec  4 05:34:50 np0005545273 podman[246351]: 2025-12-04 10:34:50.995243287 +0000 UTC m=+0.093991255 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:34:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:34:54.900 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:34:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:34:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:34:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:34:54.901 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:34:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:34:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:34:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:34:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:34:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:34:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:34:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:34:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:34:59 np0005545273 podman[246374]: 2025-12-04 10:34:59.945950278 +0000 UTC m=+0.051998080 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  4 05:35:00 np0005545273 podman[246373]: 2025-12-04 10:35:00.02550507 +0000 UTC m=+0.133335306 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  4 05:35:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec  4 05:35:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec  4 05:35:08 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  4 05:35:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  4 05:35:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  4 05:35:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:35:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045903362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:35:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:35:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045903362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:35:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:21 np0005545273 podman[246420]: 2025-12-04 10:35:21.957621348 +0000 UTC m=+0.058056118 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  4 05:35:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:35:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5692 writes, 24K keys, 5692 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5692 writes, 915 syncs, 6.22 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5611613a38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  4 05:35:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:35:26
Dec  4 05:35:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:35:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:35:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'vms']
Dec  4 05:35:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:35:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:35:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:35:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:35:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:35:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:35:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:35:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:35:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:35:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1201.0 total, 600.0 interval#012Cumulative writes: 7142 writes, 28K keys, 7142 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7142 writes, 1395 syncs, 5.12 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Dec  4 05:35:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:30 np0005545273 podman[246444]: 2025-12-04 10:35:30.035899993 +0000 UTC m=+0.050724520 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:35:30 np0005545273 podman[246463]: 2025-12-04 10:35:30.145027606 +0000 UTC m=+0.082440223 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:35:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:35:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:35:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5703 writes, 24K keys, 5703 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5703 writes, 902 syncs, 6.32 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.816 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.844 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.844 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.844 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.857 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.858 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.859 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.894 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.895 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.895 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.895 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:35:37 np0005545273 nova_compute[244644]: 2025-12-04 10:35:37.896 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:35:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:35:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989021705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.519 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  4 05:35:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.708 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.710 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.710 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.711 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.798 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.799 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  4 05:35:38 np0005545273 nova_compute[244644]: 2025-12-04 10:35:38.815 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  4 05:35:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:35:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364954019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:35:39 np0005545273 nova_compute[244644]: 2025-12-04 10:35:39.397 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  4 05:35:39 np0005545273 nova_compute[244644]: 2025-12-04 10:35:39.403 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  4 05:35:39 np0005545273 nova_compute[244644]: 2025-12-04 10:35:39.431 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  4 05:35:39 np0005545273 nova_compute[244644]: 2025-12-04 10:35:39.433 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  4 05:35:39 np0005545273 nova_compute[244644]: 2025-12-04 10:35:39.433 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  4 05:35:39 np0005545273 nova_compute[244644]: 2025-12-04 10:35:39.912 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  4 05:35:39 np0005545273 nova_compute[244644]: 2025-12-04 10:35:39.913 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  4 05:35:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:43 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec  4 05:35:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:45 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:35:45.526 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  4 05:35:45 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:35:45.527 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  4 05:35:45 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:35:45.527 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:35:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:35:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:35:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:35:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:35:46 np0005545273 podman[246679]: 2025-12-04 10:35:46.414596176 +0000 UTC m=+0.025144054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:35:46 np0005545273 podman[246679]: 2025-12-04 10:35:46.918549407 +0000 UTC m=+0.529097265 container create 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:35:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:47 np0005545273 systemd[1]: Started libpod-conmon-2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7.scope.
Dec  4 05:35:48 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:35:48 np0005545273 podman[246679]: 2025-12-04 10:35:48.13255933 +0000 UTC m=+1.743107248 container init 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:35:48 np0005545273 podman[246679]: 2025-12-04 10:35:48.14361229 +0000 UTC m=+1.754160168 container start 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:35:48 np0005545273 podman[246679]: 2025-12-04 10:35:48.149268648 +0000 UTC m=+1.759816526 container attach 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:35:48 np0005545273 gifted_feistel[246695]: 167 167
Dec  4 05:35:48 np0005545273 systemd[1]: libpod-2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7.scope: Deactivated successfully.
Dec  4 05:35:48 np0005545273 podman[246679]: 2025-12-04 10:35:48.15632471 +0000 UTC m=+1.766872568 container died 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec  4 05:35:48 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c0173c65e0fb61891059b10a91e08ecae8bdeab822458df5dc0161b530f0aa57-merged.mount: Deactivated successfully.
Dec  4 05:35:48 np0005545273 podman[246679]: 2025-12-04 10:35:48.207125721 +0000 UTC m=+1.817673579 container remove 2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_feistel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:35:48 np0005545273 systemd[1]: libpod-conmon-2fbc85150ed9b325b2df90bb0069c7461838d755e7ce9a1306ce830a17a1f1d7.scope: Deactivated successfully.
Dec  4 05:35:48 np0005545273 podman[246717]: 2025-12-04 10:35:48.391496191 +0000 UTC m=+0.050537655 container create ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:35:48 np0005545273 systemd[1]: Started libpod-conmon-ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d.scope.
Dec  4 05:35:48 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:35:48 np0005545273 podman[246717]: 2025-12-04 10:35:48.370737955 +0000 UTC m=+0.029779449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:35:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:48 np0005545273 podman[246717]: 2025-12-04 10:35:48.476470875 +0000 UTC m=+0.135512359 container init ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:35:48 np0005545273 podman[246717]: 2025-12-04 10:35:48.488568431 +0000 UTC m=+0.147609895 container start ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:35:48 np0005545273 podman[246717]: 2025-12-04 10:35:48.57948631 +0000 UTC m=+0.238527774 container attach ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:35:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:48 np0005545273 charming_grothendieck[246734]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:35:48 np0005545273 charming_grothendieck[246734]: --> All data devices are unavailable
Dec  4 05:35:49 np0005545273 systemd[1]: libpod-ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d.scope: Deactivated successfully.
Dec  4 05:35:49 np0005545273 podman[246754]: 2025-12-04 10:35:49.06863101 +0000 UTC m=+0.026060488 container died ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  4 05:35:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5a2c67d7ee6a6d7bc4a49844fafb81f5426ba088c303fdee0b78fec01d4bb40e-merged.mount: Deactivated successfully.
Dec  4 05:35:49 np0005545273 podman[246754]: 2025-12-04 10:35:49.111044385 +0000 UTC m=+0.068473833 container remove ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:35:49 np0005545273 systemd[1]: libpod-conmon-ec8ec4a5d8727d88820c25f97f3cd037e5927478494051c4dc08e237fc935b2d.scope: Deactivated successfully.
Dec  4 05:35:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:49 np0005545273 podman[246832]: 2025-12-04 10:35:49.601457696 +0000 UTC m=+0.046232339 container create de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:35:49 np0005545273 systemd[1]: Started libpod-conmon-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope.
Dec  4 05:35:49 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:35:49 np0005545273 podman[246832]: 2025-12-04 10:35:49.579794657 +0000 UTC m=+0.024569360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:35:49 np0005545273 podman[246832]: 2025-12-04 10:35:49.72987429 +0000 UTC m=+0.174648963 container init de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:35:49 np0005545273 podman[246832]: 2025-12-04 10:35:49.737968908 +0000 UTC m=+0.182743561 container start de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:35:49 np0005545273 podman[246832]: 2025-12-04 10:35:49.742237582 +0000 UTC m=+0.187012235 container attach de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:35:49 np0005545273 blissful_ganguly[246849]: 167 167
Dec  4 05:35:49 np0005545273 systemd[1]: libpod-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope: Deactivated successfully.
Dec  4 05:35:49 np0005545273 conmon[246849]: conmon de6fcc1e329d024a6464 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope/container/memory.events
Dec  4 05:35:49 np0005545273 podman[246832]: 2025-12-04 10:35:49.745653116 +0000 UTC m=+0.190427789 container died de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:35:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b75abd2673fc8e874efa538d18c996052b004f8a5bcc861c5c2424b09417bf8e-merged.mount: Deactivated successfully.
Dec  4 05:35:49 np0005545273 podman[246832]: 2025-12-04 10:35:49.786215055 +0000 UTC m=+0.230989708 container remove de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:35:49 np0005545273 systemd[1]: libpod-conmon-de6fcc1e329d024a64644bacb6d5c91d972df1267c0cdc85e8e4cd0f71d7df58.scope: Deactivated successfully.
Dec  4 05:35:50 np0005545273 podman[246872]: 2025-12-04 10:35:50.036275199 +0000 UTC m=+0.117763345 container create 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:35:50 np0005545273 podman[246872]: 2025-12-04 10:35:49.942975821 +0000 UTC m=+0.024463987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:35:50 np0005545273 systemd[1]: Started libpod-conmon-57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc.scope.
Dec  4 05:35:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:35:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:50 np0005545273 podman[246872]: 2025-12-04 10:35:50.133866392 +0000 UTC m=+0.215354558 container init 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:35:50 np0005545273 podman[246872]: 2025-12-04 10:35:50.141851466 +0000 UTC m=+0.223339612 container start 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:35:50 np0005545273 podman[246872]: 2025-12-04 10:35:50.161437974 +0000 UTC m=+0.242926290 container attach 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]: {
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:    "0": [
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:        {
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "devices": [
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "/dev/loop3"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            ],
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_name": "ceph_lv0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_size": "21470642176",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "name": "ceph_lv0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "tags": {
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cluster_name": "ceph",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.crush_device_class": "",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.encrypted": "0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.objectstore": "bluestore",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osd_id": "0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.type": "block",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.vdo": "0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.with_tpm": "0"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            },
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "type": "block",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "vg_name": "ceph_vg0"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:        }
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:    ],
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:    "1": [
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:        {
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "devices": [
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "/dev/loop4"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            ],
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_name": "ceph_lv1",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_size": "21470642176",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "name": "ceph_lv1",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "tags": {
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cluster_name": "ceph",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.crush_device_class": "",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.encrypted": "0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.objectstore": "bluestore",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osd_id": "1",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.type": "block",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.vdo": "0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.with_tpm": "0"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            },
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "type": "block",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "vg_name": "ceph_vg1"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:        }
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:    ],
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:    "2": [
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:        {
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "devices": [
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "/dev/loop5"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            ],
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_name": "ceph_lv2",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_size": "21470642176",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "name": "ceph_lv2",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "tags": {
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.cluster_name": "ceph",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.crush_device_class": "",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.encrypted": "0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.objectstore": "bluestore",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osd_id": "2",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.type": "block",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.vdo": "0",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:                "ceph.with_tpm": "0"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            },
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "type": "block",
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:            "vg_name": "ceph_vg2"
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:        }
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]:    ]
Dec  4 05:35:50 np0005545273 hopeful_bardeen[246889]: }
Dec  4 05:35:50 np0005545273 systemd[1]: libpod-57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc.scope: Deactivated successfully.
Dec  4 05:35:50 np0005545273 podman[246872]: 2025-12-04 10:35:50.445894038 +0000 UTC m=+0.527382204 container died 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:35:50 np0005545273 systemd[1]: var-lib-containers-storage-overlay-48db6505bb1eaada98011f95eb21f884501f159c79635280dde71a55b1d08941-merged.mount: Deactivated successfully.
Dec  4 05:35:50 np0005545273 podman[246872]: 2025-12-04 10:35:50.482573233 +0000 UTC m=+0.564061379 container remove 57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  4 05:35:50 np0005545273 systemd[1]: libpod-conmon-57c88284a6178e7d1ccf68b7a4811c5d12e4ed463ccefa029d3dfdd3fb8758cc.scope: Deactivated successfully.
Dec  4 05:35:50 np0005545273 podman[246972]: 2025-12-04 10:35:50.964577899 +0000 UTC m=+0.046302892 container create 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:35:50 np0005545273 systemd[1]: Started libpod-conmon-374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c.scope.
Dec  4 05:35:51 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:35:51 np0005545273 podman[246972]: 2025-12-04 10:35:50.948596929 +0000 UTC m=+0.030321942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:35:51 np0005545273 podman[246972]: 2025-12-04 10:35:51.050346222 +0000 UTC m=+0.132071265 container init 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:35:51 np0005545273 podman[246972]: 2025-12-04 10:35:51.05722477 +0000 UTC m=+0.138949763 container start 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:35:51 np0005545273 podman[246972]: 2025-12-04 10:35:51.06010799 +0000 UTC m=+0.141832983 container attach 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:35:51 np0005545273 distracted_colden[246988]: 167 167
Dec  4 05:35:51 np0005545273 systemd[1]: libpod-374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c.scope: Deactivated successfully.
Dec  4 05:35:51 np0005545273 podman[246972]: 2025-12-04 10:35:51.063940974 +0000 UTC m=+0.145665967 container died 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:35:51 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4e6d95de51da5d38b9d2e37988b9e2c1bfaa2469b86b8007f5b0d824edcb3d77-merged.mount: Deactivated successfully.
Dec  4 05:35:51 np0005545273 podman[246972]: 2025-12-04 10:35:51.108246606 +0000 UTC m=+0.189971619 container remove 374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_colden, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:35:51 np0005545273 systemd[1]: libpod-conmon-374789b156cf375b77be996cb1d9a1e4412afd9dbf5517b5da5a2415fe1fec9c.scope: Deactivated successfully.
Dec  4 05:35:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:51 np0005545273 podman[247011]: 2025-12-04 10:35:51.268381814 +0000 UTC m=+0.043335729 container create 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:35:51 np0005545273 systemd[1]: Started libpod-conmon-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope.
Dec  4 05:35:51 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:35:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:51 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:35:51 np0005545273 podman[247011]: 2025-12-04 10:35:51.249449473 +0000 UTC m=+0.024403328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:35:51 np0005545273 podman[247011]: 2025-12-04 10:35:51.35177437 +0000 UTC m=+0.126728225 container init 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:35:51 np0005545273 podman[247011]: 2025-12-04 10:35:51.359081948 +0000 UTC m=+0.134035773 container start 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:35:51 np0005545273 podman[247011]: 2025-12-04 10:35:51.362486391 +0000 UTC m=+0.137440226 container attach 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:35:52 np0005545273 lvm[247113]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:35:52 np0005545273 lvm[247108]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:35:52 np0005545273 lvm[247113]: VG ceph_vg1 finished
Dec  4 05:35:52 np0005545273 lvm[247108]: VG ceph_vg0 finished
Dec  4 05:35:52 np0005545273 lvm[247115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:35:52 np0005545273 lvm[247115]: VG ceph_vg2 finished
Dec  4 05:35:52 np0005545273 lvm[247129]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:35:52 np0005545273 lvm[247129]: VG ceph_vg2 finished
Dec  4 05:35:52 np0005545273 podman[247103]: 2025-12-04 10:35:52.215986285 +0000 UTC m=+0.084039962 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:35:52 np0005545273 nostalgic_joliot[247028]: {}
Dec  4 05:35:52 np0005545273 systemd[1]: libpod-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope: Deactivated successfully.
Dec  4 05:35:52 np0005545273 systemd[1]: libpod-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope: Consumed 1.457s CPU time.
Dec  4 05:35:52 np0005545273 podman[247011]: 2025-12-04 10:35:52.267538834 +0000 UTC m=+1.042492669 container died 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:35:52 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5ee72ac5abe19f6147bba575425053318c0d9b94216d574b8aed7859ced15686-merged.mount: Deactivated successfully.
Dec  4 05:35:52 np0005545273 podman[247011]: 2025-12-04 10:35:52.312751117 +0000 UTC m=+1.087704952 container remove 7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_joliot, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:35:52 np0005545273 systemd[1]: libpod-conmon-7514dd5339deae5285838ca72e661a8abea10da180c3f6f3465fc37492e70887.scope: Deactivated successfully.
Dec  4 05:35:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:35:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:35:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:35:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:35:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:35:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:35:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:35:54.902 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:35:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:35:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:35:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:35:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:35:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:35:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:35:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:35:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:35:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:35:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:35:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:35:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:35:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:00 np0005545273 podman[247174]: 2025-12-04 10:36:00.983905985 +0000 UTC m=+0.083029297 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:36:01 np0005545273 podman[247173]: 2025-12-04 10:36:01.003929223 +0000 UTC m=+0.103230540 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:36:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:36:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528378920' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:36:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:36:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528378920' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:36:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.306157) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579306206, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1516, "num_deletes": 251, "total_data_size": 2463957, "memory_usage": 2498784, "flush_reason": "Manual Compaction"}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579329191, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2419087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14868, "largest_seqno": 16383, "table_properties": {"data_size": 2412016, "index_size": 4142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14465, "raw_average_key_size": 19, "raw_value_size": 2397875, "raw_average_value_size": 3271, "num_data_blocks": 189, "num_entries": 733, "num_filter_entries": 733, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844419, "oldest_key_time": 1764844419, "file_creation_time": 1764844579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 23103 microseconds, and 9140 cpu microseconds.
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.329258) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2419087 bytes OK
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.329316) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.332608) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.332629) EVENT_LOG_v1 {"time_micros": 1764844579332623, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.332659) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2457318, prev total WAL file size 2457318, number of live WAL files 2.
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.333720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2362KB)], [35(7136KB)]
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579333811, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9726483, "oldest_snapshot_seqno": -1}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4056 keys, 7920654 bytes, temperature: kUnknown
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579393987, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7920654, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7891377, "index_size": 18031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99157, "raw_average_key_size": 24, "raw_value_size": 7815815, "raw_average_value_size": 1926, "num_data_blocks": 763, "num_entries": 4056, "num_filter_entries": 4056, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.394310) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7920654 bytes
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.396210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 4570, records dropped: 514 output_compression: NoCompression
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.396234) EVENT_LOG_v1 {"time_micros": 1764844579396222, "job": 16, "event": "compaction_finished", "compaction_time_micros": 60258, "compaction_time_cpu_micros": 18332, "output_level": 6, "num_output_files": 1, "total_output_size": 7920654, "num_input_records": 4570, "num_output_records": 4056, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579396721, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844579398356, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.333584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:36:19 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:36:19.398475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:36:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:22 np0005545273 podman[247222]: 2025-12-04 10:36:22.974429098 +0000 UTC m=+0.073250639 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  4 05:36:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:36:26
Dec  4 05:36:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:36:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:36:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Dec  4 05:36:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:36:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:36:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:36:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:36:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:36:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:36:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:36:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:36:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:31 np0005545273 podman[247244]: 2025-12-04 10:36:31.970150078 +0000 UTC m=+0.064880466 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:36:31 np0005545273 podman[247243]: 2025-12-04 10:36:31.979749662 +0000 UTC m=+0.083299755 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:36:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:36 np0005545273 nova_compute[244644]: 2025-12-04 10:36:36.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:36 np0005545273 nova_compute[244644]: 2025-12-04 10:36:36.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:36:36 np0005545273 nova_compute[244644]: 2025-12-04 10:36:36.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:36:36 np0005545273 nova_compute[244644]: 2025-12-04 10:36:36.848 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:36:36 np0005545273 nova_compute[244644]: 2025-12-04 10:36:36.848 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:36:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:37 np0005545273 nova_compute[244644]: 2025-12-04 10:36:37.833 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:36:37 np0005545273 nova_compute[244644]: 2025-12-04 10:36:37.833 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:36:37 np0005545273 nova_compute[244644]: 2025-12-04 10:36:37.834 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:36:37 np0005545273 nova_compute[244644]: 2025-12-04 10:36:37.834 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:36:37 np0005545273 nova_compute[244644]: 2025-12-04 10:36:37.834 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:36:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:36:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744744767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:36:38 np0005545273 nova_compute[244644]: 2025-12-04 10:36:38.403 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:36:38 np0005545273 nova_compute[244644]: 2025-12-04 10:36:38.565 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:36:38 np0005545273 nova_compute[244644]: 2025-12-04 10:36:38.567 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:36:38 np0005545273 nova_compute[244644]: 2025-12-04 10:36:38.567 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:36:38 np0005545273 nova_compute[244644]: 2025-12-04 10:36:38.567 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:36:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:39 np0005545273 nova_compute[244644]: 2025-12-04 10:36:39.913 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:36:39 np0005545273 nova_compute[244644]: 2025-12-04 10:36:39.914 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:36:39 np0005545273 nova_compute[244644]: 2025-12-04 10:36:39.934 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:36:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:36:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788744034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:36:40 np0005545273 nova_compute[244644]: 2025-12-04 10:36:40.548 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:36:40 np0005545273 nova_compute[244644]: 2025-12-04 10:36:40.557 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:36:40 np0005545273 nova_compute[244644]: 2025-12-04 10:36:40.777 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:36:40 np0005545273 nova_compute[244644]: 2025-12-04 10:36:40.778 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:36:40 np0005545273 nova_compute[244644]: 2025-12-04 10:36:40.778 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:36:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.268 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.269 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.270 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.270 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:36:41 np0005545273 nova_compute[244644]: 2025-12-04 10:36:41.270 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:36:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:36:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:36:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:53 np0005545273 podman[247452]: 2025-12-04 10:36:53.096863617 +0000 UTC m=+0.070108404 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Dec  4 05:36:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:36:53 np0005545273 podman[247567]: 2025-12-04 10:36:53.920211786 +0000 UTC m=+0.040405366 container create 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:36:53 np0005545273 systemd[1]: Started libpod-conmon-659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2.scope.
Dec  4 05:36:53 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:36:53 np0005545273 podman[247567]: 2025-12-04 10:36:53.992976886 +0000 UTC m=+0.113170466 container init 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:36:53 np0005545273 podman[247567]: 2025-12-04 10:36:53.902408513 +0000 UTC m=+0.022602113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:36:53 np0005545273 podman[247567]: 2025-12-04 10:36:53.999216461 +0000 UTC m=+0.119410041 container start 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:36:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:54 np0005545273 podman[247567]: 2025-12-04 10:36:54.002744319 +0000 UTC m=+0.122937919 container attach 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:36:54 np0005545273 wizardly_sinoussi[247583]: 167 167
Dec  4 05:36:54 np0005545273 systemd[1]: libpod-659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2.scope: Deactivated successfully.
Dec  4 05:36:54 np0005545273 podman[247567]: 2025-12-04 10:36:54.004151384 +0000 UTC m=+0.124344964 container died 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:36:54 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f873706135dd6ba5060c75cd54ba8a53f1cd5ebf694a0904a96804cf09a5837e-merged.mount: Deactivated successfully.
Dec  4 05:36:54 np0005545273 podman[247567]: 2025-12-04 10:36:54.042564099 +0000 UTC m=+0.162757679 container remove 659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_sinoussi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:36:54 np0005545273 systemd[1]: libpod-conmon-659e471e03323567bf1b6c1e76319e9b12490543822632f93f7beefed5e00ee2.scope: Deactivated successfully.
Dec  4 05:36:54 np0005545273 podman[247607]: 2025-12-04 10:36:54.196073488 +0000 UTC m=+0.041422952 container create 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:36:54 np0005545273 systemd[1]: Started libpod-conmon-9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a.scope.
Dec  4 05:36:54 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:36:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:54 np0005545273 podman[247607]: 2025-12-04 10:36:54.178290675 +0000 UTC m=+0.023640169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:36:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:54 np0005545273 podman[247607]: 2025-12-04 10:36:54.293317736 +0000 UTC m=+0.138667220 container init 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:36:54 np0005545273 podman[247607]: 2025-12-04 10:36:54.301017728 +0000 UTC m=+0.146367192 container start 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:36:54 np0005545273 podman[247607]: 2025-12-04 10:36:54.307582681 +0000 UTC m=+0.152932175 container attach 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:36:54 np0005545273 modest_ellis[247624]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:36:54 np0005545273 modest_ellis[247624]: --> All data devices are unavailable
Dec  4 05:36:54 np0005545273 systemd[1]: libpod-9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a.scope: Deactivated successfully.
Dec  4 05:36:54 np0005545273 podman[247607]: 2025-12-04 10:36:54.842724902 +0000 UTC m=+0.688074366 container died 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:36:54 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6565cd566cbc100fb84692ffbd5ba83e4c4a20c400d190940857b778869a30b1-merged.mount: Deactivated successfully.
Dec  4 05:36:54 np0005545273 podman[247607]: 2025-12-04 10:36:54.888543561 +0000 UTC m=+0.733893025 container remove 9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:36:54 np0005545273 systemd[1]: libpod-conmon-9144a96236056b83a683ce08bc105be36f0a844f7d6efaaa62e42d06894c7a3a.scope: Deactivated successfully.
Dec  4 05:36:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:36:54.902 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:36:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:36:54.904 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:36:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:36:54.904 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:36:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:36:55 np0005545273 podman[247716]: 2025-12-04 10:36:55.332619987 +0000 UTC m=+0.046455947 container create 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:36:55 np0005545273 systemd[1]: Started libpod-conmon-9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c.scope.
Dec  4 05:36:55 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:36:55 np0005545273 podman[247716]: 2025-12-04 10:36:55.310799394 +0000 UTC m=+0.024635404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:36:55 np0005545273 podman[247716]: 2025-12-04 10:36:55.444400156 +0000 UTC m=+0.158236136 container init 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:36:55 np0005545273 podman[247716]: 2025-12-04 10:36:55.450324174 +0000 UTC m=+0.164160174 container start 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:36:55 np0005545273 cranky_neumann[247734]: 167 167
Dec  4 05:36:55 np0005545273 systemd[1]: libpod-9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c.scope: Deactivated successfully.
Dec  4 05:36:55 np0005545273 podman[247716]: 2025-12-04 10:36:55.454664122 +0000 UTC m=+0.168500082 container attach 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:36:55 np0005545273 podman[247716]: 2025-12-04 10:36:55.455266647 +0000 UTC m=+0.169102607 container died 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:36:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c580f1e00629821d687b7442c110f02bbb6b0890aa522b6af8365141c09e4a3f-merged.mount: Deactivated successfully.
Dec  4 05:36:55 np0005545273 podman[247716]: 2025-12-04 10:36:55.50002666 +0000 UTC m=+0.213862630 container remove 9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:36:55 np0005545273 systemd[1]: libpod-conmon-9cd9d018d0580217db63826ebde06df7062cdcad38feb08639bc82deddbd075c.scope: Deactivated successfully.
Dec  4 05:36:55 np0005545273 podman[247760]: 2025-12-04 10:36:55.665447174 +0000 UTC m=+0.048963188 container create a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:36:55 np0005545273 systemd[1]: Started libpod-conmon-a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033.scope.
Dec  4 05:36:55 np0005545273 podman[247760]: 2025-12-04 10:36:55.642504784 +0000 UTC m=+0.026020848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:36:55 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:36:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:55 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:55 np0005545273 podman[247760]: 2025-12-04 10:36:55.775585074 +0000 UTC m=+0.159101178 container init a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:36:55 np0005545273 podman[247760]: 2025-12-04 10:36:55.782623029 +0000 UTC m=+0.166139053 container start a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:36:55 np0005545273 podman[247760]: 2025-12-04 10:36:55.787014548 +0000 UTC m=+0.170530602 container attach a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:36:56 np0005545273 stoic_gates[247776]: {
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:    "0": [
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:        {
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "devices": [
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "/dev/loop3"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            ],
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_name": "ceph_lv0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_size": "21470642176",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "name": "ceph_lv0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "tags": {
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cluster_name": "ceph",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.crush_device_class": "",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.encrypted": "0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.objectstore": "bluestore",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osd_id": "0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.type": "block",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.vdo": "0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.with_tpm": "0"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            },
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "type": "block",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "vg_name": "ceph_vg0"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:        }
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:    ],
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:    "1": [
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:        {
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "devices": [
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "/dev/loop4"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            ],
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_name": "ceph_lv1",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_size": "21470642176",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "name": "ceph_lv1",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "tags": {
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cluster_name": "ceph",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.crush_device_class": "",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.encrypted": "0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.objectstore": "bluestore",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osd_id": "1",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.type": "block",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.vdo": "0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.with_tpm": "0"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            },
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "type": "block",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "vg_name": "ceph_vg1"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:        }
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:    ],
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:    "2": [
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:        {
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "devices": [
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "/dev/loop5"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            ],
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_name": "ceph_lv2",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_size": "21470642176",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "name": "ceph_lv2",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "tags": {
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.cluster_name": "ceph",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.crush_device_class": "",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.encrypted": "0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.objectstore": "bluestore",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osd_id": "2",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.type": "block",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.vdo": "0",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:                "ceph.with_tpm": "0"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            },
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "type": "block",
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:            "vg_name": "ceph_vg2"
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:        }
Dec  4 05:36:56 np0005545273 stoic_gates[247776]:    ]
Dec  4 05:36:56 np0005545273 stoic_gates[247776]: }
Dec  4 05:36:56 np0005545273 systemd[1]: libpod-a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033.scope: Deactivated successfully.
Dec  4 05:36:56 np0005545273 podman[247760]: 2025-12-04 10:36:56.09664807 +0000 UTC m=+0.480164124 container died a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:36:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-879ba2e7ee5c091de09cd3b9aa54632394fef58431540068ca32b55f9237f8f8-merged.mount: Deactivated successfully.
Dec  4 05:36:56 np0005545273 podman[247760]: 2025-12-04 10:36:56.149704599 +0000 UTC m=+0.533220623 container remove a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:36:56 np0005545273 systemd[1]: libpod-conmon-a968a09bf9bc313fbd5b45e6aea52972321ccf47d149985a06026a95ab00d033.scope: Deactivated successfully.
Dec  4 05:36:56 np0005545273 podman[247858]: 2025-12-04 10:36:56.602932923 +0000 UTC m=+0.037790631 container create 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:36:56 np0005545273 systemd[1]: Started libpod-conmon-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope.
Dec  4 05:36:56 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:36:56 np0005545273 podman[247858]: 2025-12-04 10:36:56.586139395 +0000 UTC m=+0.020997133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:36:56 np0005545273 podman[247858]: 2025-12-04 10:36:56.683776794 +0000 UTC m=+0.118634522 container init 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:36:56 np0005545273 podman[247858]: 2025-12-04 10:36:56.689819974 +0000 UTC m=+0.124677682 container start 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:36:56 np0005545273 podman[247858]: 2025-12-04 10:36:56.693467115 +0000 UTC m=+0.128324843 container attach 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:36:56 np0005545273 systemd[1]: libpod-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope: Deactivated successfully.
Dec  4 05:36:56 np0005545273 relaxed_poincare[247874]: 167 167
Dec  4 05:36:56 np0005545273 conmon[247874]: conmon 21b0251b30beb788bdf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope/container/memory.events
Dec  4 05:36:56 np0005545273 podman[247858]: 2025-12-04 10:36:56.69529406 +0000 UTC m=+0.130151768 container died 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:36:56 np0005545273 systemd[1]: var-lib-containers-storage-overlay-81e3c92a34f5f749f7f8f35f9396938ced23bb76339901e02eeea13e943bab19-merged.mount: Deactivated successfully.
Dec  4 05:36:56 np0005545273 podman[247858]: 2025-12-04 10:36:56.7367337 +0000 UTC m=+0.171591438 container remove 21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_poincare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:36:56 np0005545273 systemd[1]: libpod-conmon-21b0251b30beb788bdf90a4950c1a9ae88d07bc16d9879c6fb93706fbbed873c.scope: Deactivated successfully.
Dec  4 05:36:56 np0005545273 podman[247896]: 2025-12-04 10:36:56.938240852 +0000 UTC m=+0.065180061 container create c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:36:56 np0005545273 systemd[1]: Started libpod-conmon-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope.
Dec  4 05:36:57 np0005545273 podman[247896]: 2025-12-04 10:36:56.911693712 +0000 UTC m=+0.038633001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:36:57 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:36:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:57 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:36:57 np0005545273 podman[247896]: 2025-12-04 10:36:57.043987243 +0000 UTC m=+0.170926522 container init c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:36:57 np0005545273 podman[247896]: 2025-12-04 10:36:57.055088819 +0000 UTC m=+0.182028078 container start c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:36:57 np0005545273 podman[247896]: 2025-12-04 10:36:57.059373156 +0000 UTC m=+0.186312385 container attach c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:36:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec  4 05:36:57 np0005545273 lvm[247991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:36:57 np0005545273 lvm[247991]: VG ceph_vg0 finished
Dec  4 05:36:57 np0005545273 lvm[247993]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:36:57 np0005545273 lvm[247993]: VG ceph_vg1 finished
Dec  4 05:36:57 np0005545273 lvm[247995]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:36:57 np0005545273 lvm[247995]: VG ceph_vg2 finished
Dec  4 05:36:57 np0005545273 lvm[247996]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:36:57 np0005545273 lvm[247996]: VG ceph_vg1 finished
Dec  4 05:36:57 np0005545273 tender_nightingale[247914]: {}
Dec  4 05:36:57 np0005545273 systemd[1]: libpod-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope: Deactivated successfully.
Dec  4 05:36:57 np0005545273 systemd[1]: libpod-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope: Consumed 1.315s CPU time.
Dec  4 05:36:57 np0005545273 podman[247896]: 2025-12-04 10:36:57.856630655 +0000 UTC m=+0.983569864 container died c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:36:57 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0c2326a7d754a825c56125195b4e961f7448ec224e544e4a53dda0b887bf644e-merged.mount: Deactivated successfully.
Dec  4 05:36:57 np0005545273 podman[247896]: 2025-12-04 10:36:57.901224015 +0000 UTC m=+1.028163224 container remove c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:36:57 np0005545273 systemd[1]: libpod-conmon-c2b76a6cb71e3ea403e781547f495efe3cd525c51b6ce6dbcf80749fb0fbcc40.scope: Deactivated successfully.
Dec  4 05:36:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:36:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:36:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:36:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:36:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:36:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:36:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:36:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:36:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:36:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:36:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:36:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:37:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:37:02 np0005545273 podman[248044]: 2025-12-04 10:37:02.963149138 +0000 UTC m=+0.065061748 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  4 05:37:02 np0005545273 podman[248043]: 2025-12-04 10:37:02.990745225 +0000 UTC m=+0.091834794 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec  4 05:37:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:37:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:37:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:37:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec  4 05:37:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:37:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833328030' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:37:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:37:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833328030' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:37:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:23 np0005545273 podman[248089]: 2025-12-04 10:37:23.93922811 +0000 UTC m=+0.049026011 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  4 05:37:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:37:26
Dec  4 05:37:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:37:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:37:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.control']
Dec  4 05:37:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:37:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:37:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:37:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:37:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:37:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:37:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:37:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:37:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:33 np0005545273 podman[248114]: 2025-12-04 10:37:33.942119399 +0000 UTC m=+0.053314307 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Dec  4 05:37:33 np0005545273 podman[248113]: 2025-12-04 10:37:33.975458458 +0000 UTC m=+0.086653226 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:37:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:37:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:37 np0005545273 nova_compute[244644]: 2025-12-04 10:37:37.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:37 np0005545273 nova_compute[244644]: 2025-12-04 10:37:37.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:37:37 np0005545273 nova_compute[244644]: 2025-12-04 10:37:37.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:37:37 np0005545273 nova_compute[244644]: 2025-12-04 10:37:37.353 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.360 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.360 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.360 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.394 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.394 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.394 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:37:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:37:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614907084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:37:38 np0005545273 nova_compute[244644]: 2025-12-04 10:37:38.946 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:37:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.149 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.150 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.150 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.151 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.208 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.209 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.225 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:37:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:37:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650420587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.746 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.751 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.768 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.771 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:37:39 np0005545273 nova_compute[244644]: 2025-12-04 10:37:39.771 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:37:40 np0005545273 nova_compute[244644]: 2025-12-04 10:37:40.750 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:40 np0005545273 nova_compute[244644]: 2025-12-04 10:37:40.750 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:40 np0005545273 nova_compute[244644]: 2025-12-04 10:37:40.750 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:40 np0005545273 nova_compute[244644]: 2025-12-04 10:37:40.751 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:40 np0005545273 nova_compute[244644]: 2025-12-04 10:37:40.751 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:37:40 np0005545273 nova_compute[244644]: 2025-12-04 10:37:40.751 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:37:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:37:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:37:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:37:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:37:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:37:54.903 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:37:54 np0005545273 podman[248202]: 2025-12-04 10:37:54.944007322 +0000 UTC m=+0.056739583 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  4 05:37:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:37:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:37:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:37:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:37:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:37:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:37:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:37:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:37:59 np0005545273 podman[248367]: 2025-12-04 10:37:59.49392682 +0000 UTC m=+0.040366454 container create 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:37:59 np0005545273 systemd[1]: Started libpod-conmon-0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b.scope.
Dec  4 05:37:59 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:37:59 np0005545273 podman[248367]: 2025-12-04 10:37:59.476504337 +0000 UTC m=+0.022943991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:37:59 np0005545273 podman[248367]: 2025-12-04 10:37:59.585429077 +0000 UTC m=+0.131868721 container init 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:37:59 np0005545273 podman[248367]: 2025-12-04 10:37:59.592600635 +0000 UTC m=+0.139040279 container start 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:37:59 np0005545273 podman[248367]: 2025-12-04 10:37:59.596430161 +0000 UTC m=+0.142869845 container attach 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:37:59 np0005545273 elated_zhukovsky[248384]: 167 167
Dec  4 05:37:59 np0005545273 systemd[1]: libpod-0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b.scope: Deactivated successfully.
Dec  4 05:37:59 np0005545273 podman[248367]: 2025-12-04 10:37:59.60365936 +0000 UTC m=+0.150099024 container died 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  4 05:37:59 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d529858f511ac93dc1c58077fa50a44d8cbd035f153a5b51833fd1abad50ee7f-merged.mount: Deactivated successfully.
Dec  4 05:37:59 np0005545273 podman[248367]: 2025-12-04 10:37:59.6506984 +0000 UTC m=+0.197138034 container remove 0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  4 05:37:59 np0005545273 systemd[1]: libpod-conmon-0a94fda662edbc9a771070e1f9325302805577be5d2b5c9d28649b2ed053210b.scope: Deactivated successfully.
Dec  4 05:37:59 np0005545273 podman[248407]: 2025-12-04 10:37:59.812014283 +0000 UTC m=+0.049327209 container create f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:37:59 np0005545273 systemd[1]: Started libpod-conmon-f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99.scope.
Dec  4 05:37:59 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:37:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:37:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:37:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:37:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:37:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:37:59 np0005545273 podman[248407]: 2025-12-04 10:37:59.875179914 +0000 UTC m=+0.112492880 container init f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:37:59 np0005545273 podman[248407]: 2025-12-04 10:37:59.882152177 +0000 UTC m=+0.119465113 container start f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:37:59 np0005545273 podman[248407]: 2025-12-04 10:37:59.885863369 +0000 UTC m=+0.123176305 container attach f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:37:59 np0005545273 podman[248407]: 2025-12-04 10:37:59.794843405 +0000 UTC m=+0.032156361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:38:00 np0005545273 wizardly_jackson[248424]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:38:00 np0005545273 wizardly_jackson[248424]: --> All data devices are unavailable
Dec  4 05:38:00 np0005545273 systemd[1]: libpod-f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99.scope: Deactivated successfully.
Dec  4 05:38:00 np0005545273 podman[248407]: 2025-12-04 10:38:00.325302579 +0000 UTC m=+0.562615515 container died f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:38:00 np0005545273 systemd[1]: var-lib-containers-storage-overlay-eabb39c294caa856a4ebb33410f816cbb2ac790cb670dac723bad34c85eb5c6b-merged.mount: Deactivated successfully.
Dec  4 05:38:00 np0005545273 podman[248407]: 2025-12-04 10:38:00.369853427 +0000 UTC m=+0.607166363 container remove f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_jackson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:38:00 np0005545273 systemd[1]: libpod-conmon-f57f74ef9eb9df41883c0948dedddbb765a6eb15722506a0c121e67c4ca39d99.scope: Deactivated successfully.
Dec  4 05:38:00 np0005545273 podman[248518]: 2025-12-04 10:38:00.8435786 +0000 UTC m=+0.040782875 container create fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:38:00 np0005545273 systemd[1]: Started libpod-conmon-fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723.scope.
Dec  4 05:38:00 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:38:00 np0005545273 podman[248518]: 2025-12-04 10:38:00.91876289 +0000 UTC m=+0.115967175 container init fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:38:00 np0005545273 podman[248518]: 2025-12-04 10:38:00.825229994 +0000 UTC m=+0.022434269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:38:00 np0005545273 podman[248518]: 2025-12-04 10:38:00.93198236 +0000 UTC m=+0.129186625 container start fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:38:00 np0005545273 podman[248518]: 2025-12-04 10:38:00.936491151 +0000 UTC m=+0.133695416 container attach fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:38:00 np0005545273 dazzling_ramanujan[248534]: 167 167
Dec  4 05:38:00 np0005545273 systemd[1]: libpod-fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723.scope: Deactivated successfully.
Dec  4 05:38:00 np0005545273 podman[248518]: 2025-12-04 10:38:00.941181498 +0000 UTC m=+0.138385763 container died fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:38:00 np0005545273 systemd[1]: var-lib-containers-storage-overlay-82a600fac7b2c7ecc358b1e3caf7fb3697683fe07da5142975092eed770946e3-merged.mount: Deactivated successfully.
Dec  4 05:38:00 np0005545273 podman[248518]: 2025-12-04 10:38:00.9951204 +0000 UTC m=+0.192324665 container remove fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:38:01 np0005545273 systemd[1]: libpod-conmon-fb7f65e36c9a93f47657b4e9925053091405216dc943f337f472b991c5986723.scope: Deactivated successfully.
Dec  4 05:38:01 np0005545273 podman[248559]: 2025-12-04 10:38:01.156648307 +0000 UTC m=+0.046268482 container create f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:38:01 np0005545273 systemd[1]: Started libpod-conmon-f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e.scope.
Dec  4 05:38:01 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:38:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:01 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:01 np0005545273 podman[248559]: 2025-12-04 10:38:01.137547602 +0000 UTC m=+0.027167797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:38:01 np0005545273 podman[248559]: 2025-12-04 10:38:01.239134669 +0000 UTC m=+0.128754904 container init f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:38:01 np0005545273 podman[248559]: 2025-12-04 10:38:01.249475886 +0000 UTC m=+0.139096081 container start f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:38:01 np0005545273 podman[248559]: 2025-12-04 10:38:01.254000539 +0000 UTC m=+0.143620724 container attach f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:38:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:01 np0005545273 eager_fermi[248575]: {
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:    "0": [
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:        {
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "devices": [
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "/dev/loop3"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            ],
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_name": "ceph_lv0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_size": "21470642176",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "name": "ceph_lv0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "tags": {
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cluster_name": "ceph",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.crush_device_class": "",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.encrypted": "0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.objectstore": "bluestore",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osd_id": "0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.type": "block",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.vdo": "0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.with_tpm": "0"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            },
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "type": "block",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "vg_name": "ceph_vg0"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:        }
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:    ],
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:    "1": [
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:        {
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "devices": [
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "/dev/loop4"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            ],
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_name": "ceph_lv1",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_size": "21470642176",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "name": "ceph_lv1",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "tags": {
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cluster_name": "ceph",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.crush_device_class": "",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.encrypted": "0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.objectstore": "bluestore",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osd_id": "1",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.type": "block",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.vdo": "0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.with_tpm": "0"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            },
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "type": "block",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "vg_name": "ceph_vg1"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:        }
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:    ],
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:    "2": [
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:        {
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "devices": [
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "/dev/loop5"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            ],
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_name": "ceph_lv2",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_size": "21470642176",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "name": "ceph_lv2",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "tags": {
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.cluster_name": "ceph",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.crush_device_class": "",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.encrypted": "0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.objectstore": "bluestore",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osd_id": "2",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.type": "block",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.vdo": "0",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:                "ceph.with_tpm": "0"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            },
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "type": "block",
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:            "vg_name": "ceph_vg2"
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:        }
Dec  4 05:38:01 np0005545273 eager_fermi[248575]:    ]
Dec  4 05:38:01 np0005545273 eager_fermi[248575]: }
Dec  4 05:38:01 np0005545273 systemd[1]: libpod-f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e.scope: Deactivated successfully.
Dec  4 05:38:01 np0005545273 podman[248559]: 2025-12-04 10:38:01.554901802 +0000 UTC m=+0.444521957 container died f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:38:01 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5a7478058a29ea7d4b7f871af0314645977da86a3ad35f8cc381a839aad04c21-merged.mount: Deactivated successfully.
Dec  4 05:38:01 np0005545273 podman[248559]: 2025-12-04 10:38:01.5926123 +0000 UTC m=+0.482232455 container remove f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec  4 05:38:01 np0005545273 systemd[1]: libpod-conmon-f02edfe2fc48be16d587d20965abfb499d50a9b032623450fe666d77abdf1f1e.scope: Deactivated successfully.
Dec  4 05:38:02 np0005545273 podman[248658]: 2025-12-04 10:38:02.088450443 +0000 UTC m=+0.064004283 container create ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:38:02 np0005545273 podman[248658]: 2025-12-04 10:38:02.049355821 +0000 UTC m=+0.024909561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:38:02 np0005545273 systemd[1]: Started libpod-conmon-ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9.scope.
Dec  4 05:38:02 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:38:02 np0005545273 podman[248658]: 2025-12-04 10:38:02.21016643 +0000 UTC m=+0.185720150 container init ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  4 05:38:02 np0005545273 podman[248658]: 2025-12-04 10:38:02.216442616 +0000 UTC m=+0.191996336 container start ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:38:02 np0005545273 podman[248658]: 2025-12-04 10:38:02.220242241 +0000 UTC m=+0.195795981 container attach ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:38:02 np0005545273 modest_rhodes[248674]: 167 167
Dec  4 05:38:02 np0005545273 systemd[1]: libpod-ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9.scope: Deactivated successfully.
Dec  4 05:38:02 np0005545273 podman[248658]: 2025-12-04 10:38:02.222409185 +0000 UTC m=+0.197962915 container died ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:38:02 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fb0e01e663b8ba6ffca569b4b0b3fd79f222512fe7b69bf06f124db7041594f1-merged.mount: Deactivated successfully.
Dec  4 05:38:02 np0005545273 podman[248658]: 2025-12-04 10:38:02.262635915 +0000 UTC m=+0.238189675 container remove ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:38:02 np0005545273 systemd[1]: libpod-conmon-ab8859b9acc4aaaaa2e92016e3253e2fbfed23d19a51f38aadf7b84efecf9dc9.scope: Deactivated successfully.
Dec  4 05:38:02 np0005545273 podman[248699]: 2025-12-04 10:38:02.424809659 +0000 UTC m=+0.045161764 container create 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:38:02 np0005545273 systemd[1]: Started libpod-conmon-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope.
Dec  4 05:38:02 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:38:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:38:02 np0005545273 podman[248699]: 2025-12-04 10:38:02.403301834 +0000 UTC m=+0.023653969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:38:02 np0005545273 podman[248699]: 2025-12-04 10:38:02.508558063 +0000 UTC m=+0.128910178 container init 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:38:02 np0005545273 podman[248699]: 2025-12-04 10:38:02.520822547 +0000 UTC m=+0.141174672 container start 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:38:02 np0005545273 podman[248699]: 2025-12-04 10:38:02.524522699 +0000 UTC m=+0.144874824 container attach 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:38:03 np0005545273 lvm[248795]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:38:03 np0005545273 lvm[248794]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:38:03 np0005545273 lvm[248795]: VG ceph_vg1 finished
Dec  4 05:38:03 np0005545273 lvm[248794]: VG ceph_vg0 finished
Dec  4 05:38:03 np0005545273 lvm[248797]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:38:03 np0005545273 lvm[248797]: VG ceph_vg2 finished
Dec  4 05:38:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:03 np0005545273 happy_curie[248716]: {}
Dec  4 05:38:03 np0005545273 systemd[1]: libpod-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope: Deactivated successfully.
Dec  4 05:38:03 np0005545273 systemd[1]: libpod-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope: Consumed 1.298s CPU time.
Dec  4 05:38:03 np0005545273 podman[248699]: 2025-12-04 10:38:03.343574391 +0000 UTC m=+0.963926506 container died 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:38:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay-47c92f5ab0b5d107b615ce0399773cc9b3fcf03b1f16948ef2cf5a9b36cba878-merged.mount: Deactivated successfully.
Dec  4 05:38:03 np0005545273 podman[248699]: 2025-12-04 10:38:03.388502259 +0000 UTC m=+1.008854354 container remove 727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_curie, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:38:03 np0005545273 systemd[1]: libpod-conmon-727038a27db6d37296cd7ae6aab02660708094286274f1bffbbf2b5c36467d05.scope: Deactivated successfully.
Dec  4 05:38:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:38:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:38:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:38:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:38:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:38:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:38:04 np0005545273 podman[248839]: 2025-12-04 10:38:04.951525164 +0000 UTC m=+0.054413453 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:38:04 np0005545273 podman[248838]: 2025-12-04 10:38:04.984998257 +0000 UTC m=+0.088741628 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 05:38:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:38:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1913347555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:38:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:38:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1913347555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:38:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:25 np0005545273 podman[248890]: 2025-12-04 10:38:25.325481909 +0000 UTC m=+0.062610219 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  4 05:38:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:38:26
Dec  4 05:38:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:38:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:38:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Dec  4 05:38:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:38:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:38:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:38:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:38:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:38:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:38:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:38:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:38:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:35 np0005545273 podman[248925]: 2025-12-04 10:38:35.943564037 +0000 UTC m=+0.053099622 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  4 05:38:35 np0005545273 podman[248924]: 2025-12-04 10:38:35.978291691 +0000 UTC m=+0.090486872 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:38:36 np0005545273 nova_compute[244644]: 2025-12-04 10:38:36.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:36 np0005545273 nova_compute[244644]: 2025-12-04 10:38:36.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  4 05:38:36 np0005545273 nova_compute[244644]: 2025-12-04 10:38:36.361 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  4 05:38:36 np0005545273 nova_compute[244644]: 2025-12-04 10:38:36.363 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:36 np0005545273 nova_compute[244644]: 2025-12-04 10:38:36.363 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  4 05:38:36 np0005545273 nova_compute[244644]: 2025-12-04 10:38:36.380 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.083914171852171e-06 of space, bias 4.0, pg target 0.0013006970062226053 quantized to 16 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:38:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:38 np0005545273 nova_compute[244644]: 2025-12-04 10:38:38.396 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:38 np0005545273 nova_compute[244644]: 2025-12-04 10:38:38.396 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:38:38 np0005545273 nova_compute[244644]: 2025-12-04 10:38:38.397 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:38:38 np0005545273 nova_compute[244644]: 2025-12-04 10:38:38.413 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:38:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.366 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.366 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.367 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.368 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.368 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:38:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:38:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230826318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:38:39 np0005545273 nova_compute[244644]: 2025-12-04 10:38:39.912 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.074 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.075 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.075 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.075 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.287 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.288 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.364 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.443 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.444 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.459 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.481 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  4 05:38:40 np0005545273 nova_compute[244644]: 2025-12-04 10:38:40.499 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:38:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:38:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905714256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:38:41 np0005545273 nova_compute[244644]: 2025-12-04 10:38:41.064 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:38:41 np0005545273 nova_compute[244644]: 2025-12-04 10:38:41.070 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:38:41 np0005545273 nova_compute[244644]: 2025-12-04 10:38:41.148 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:38:41 np0005545273 nova_compute[244644]: 2025-12-04 10:38:41.150 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:38:41 np0005545273 nova_compute[244644]: 2025-12-04 10:38:41.150 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:38:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:42 np0005545273 nova_compute[244644]: 2025-12-04 10:38:42.145 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:42 np0005545273 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:42 np0005545273 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:42 np0005545273 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:42 np0005545273 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:38:42 np0005545273 nova_compute[244644]: 2025-12-04 10:38:42.146 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:38:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  4 05:38:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  4 05:38:50 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  4 05:38:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:38:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  4 05:38:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  4 05:38:51 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  4 05:38:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  4 05:38:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  4 05:38:52 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  4 05:38:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 16 MiB data, 152 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 2.7 MiB/s wr, 10 op/s
Dec  4 05:38:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  4 05:38:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  4 05:38:54 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  4 05:38:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:38:54.904 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:38:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:38:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:38:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:38:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:38:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 16 MiB data, 152 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 3.3 MiB/s wr, 13 op/s
Dec  4 05:38:55 np0005545273 podman[249019]: 2025-12-04 10:38:55.952328957 +0000 UTC m=+0.060429444 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  4 05:38:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.4 MiB/s wr, 40 op/s
Dec  4 05:38:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:38:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:38:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:38:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:38:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:38:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:38:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:38:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  4 05:38:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  4 05:38:59 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  4 05:38:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.9 MiB/s wr, 55 op/s
Dec  4 05:39:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 MiB/s wr, 39 op/s
Dec  4 05:39:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 35 op/s
Dec  4 05:39:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:04 np0005545273 podman[249134]: 2025-12-04 10:39:04.145928324 +0000 UTC m=+0.076293108 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:39:04 np0005545273 podman[249134]: 2025-12-04 10:39:04.259543468 +0000 UTC m=+0.189908232 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 MiB/s wr, 31 op/s
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:39:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:39:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:39:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:39:06 np0005545273 podman[249466]: 2025-12-04 10:39:06.195640571 +0000 UTC m=+0.068903695 container create 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:39:06 np0005545273 systemd[1]: Started libpod-conmon-521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8.scope.
Dec  4 05:39:06 np0005545273 podman[249466]: 2025-12-04 10:39:06.163714196 +0000 UTC m=+0.036977410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:39:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:39:06 np0005545273 podman[249466]: 2025-12-04 10:39:06.311878211 +0000 UTC m=+0.185141355 container init 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:39:06 np0005545273 podman[249466]: 2025-12-04 10:39:06.325258409 +0000 UTC m=+0.198521523 container start 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  4 05:39:06 np0005545273 friendly_leakey[249489]: 167 167
Dec  4 05:39:06 np0005545273 systemd[1]: libpod-521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8.scope: Deactivated successfully.
Dec  4 05:39:06 np0005545273 podman[249466]: 2025-12-04 10:39:06.333226825 +0000 UTC m=+0.206489939 container attach 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:39:06 np0005545273 podman[249466]: 2025-12-04 10:39:06.334434025 +0000 UTC m=+0.207697139 container died 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:39:06 np0005545273 podman[249480]: 2025-12-04 10:39:06.343144239 +0000 UTC m=+0.103850865 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:39:06 np0005545273 podman[249483]: 2025-12-04 10:39:06.349140996 +0000 UTC m=+0.095913309 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:39:06 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5adfe84f0bfd228926324591f8c9939ef40d3e3cef7420aff8639807c5ffc0e0-merged.mount: Deactivated successfully.
Dec  4 05:39:06 np0005545273 podman[249466]: 2025-12-04 10:39:06.381280607 +0000 UTC m=+0.254543721 container remove 521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_leakey, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:39:06 np0005545273 systemd[1]: libpod-conmon-521c168b40fd2f8a23d2db73435b10f2b1d848e19e30a66aaa1f7a953c9426b8.scope: Deactivated successfully.
Dec  4 05:39:06 np0005545273 podman[249552]: 2025-12-04 10:39:06.557466971 +0000 UTC m=+0.045991823 container create 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:39:06 np0005545273 systemd[1]: Started libpod-conmon-198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912.scope.
Dec  4 05:39:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:39:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:06 np0005545273 podman[249552]: 2025-12-04 10:39:06.536199708 +0000 UTC m=+0.024724580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:39:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:06 np0005545273 podman[249552]: 2025-12-04 10:39:06.65132233 +0000 UTC m=+0.139847202 container init 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:39:06 np0005545273 podman[249552]: 2025-12-04 10:39:06.658359773 +0000 UTC m=+0.146884625 container start 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:39:06 np0005545273 podman[249552]: 2025-12-04 10:39:06.663364886 +0000 UTC m=+0.151889738 container attach 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:39:07 np0005545273 charming_feynman[249569]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:39:07 np0005545273 charming_feynman[249569]: --> All data devices are unavailable
Dec  4 05:39:07 np0005545273 systemd[1]: libpod-198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912.scope: Deactivated successfully.
Dec  4 05:39:07 np0005545273 podman[249552]: 2025-12-04 10:39:07.161923869 +0000 UTC m=+0.650448721 container died 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:39:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-40442bb086f19028960ef262c617407f7ac15af2f35cb43735855089f5e26626-merged.mount: Deactivated successfully.
Dec  4 05:39:07 np0005545273 podman[249552]: 2025-12-04 10:39:07.213670582 +0000 UTC m=+0.702195444 container remove 198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:39:07 np0005545273 systemd[1]: libpod-conmon-198b3ca392f76fde39cbd6351dd717a8510d06863d03a6d6008ee3ddec9b7912.scope: Deactivated successfully.
Dec  4 05:39:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Dec  4 05:39:07 np0005545273 podman[249664]: 2025-12-04 10:39:07.711472616 +0000 UTC m=+0.052790409 container create 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:39:07 np0005545273 systemd[1]: Started libpod-conmon-6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c.scope.
Dec  4 05:39:07 np0005545273 podman[249664]: 2025-12-04 10:39:07.688138653 +0000 UTC m=+0.029456496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:39:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:39:07 np0005545273 podman[249664]: 2025-12-04 10:39:07.807598141 +0000 UTC m=+0.148915984 container init 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:39:07 np0005545273 podman[249664]: 2025-12-04 10:39:07.817007503 +0000 UTC m=+0.158325336 container start 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:39:07 np0005545273 podman[249664]: 2025-12-04 10:39:07.821693048 +0000 UTC m=+0.163010881 container attach 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  4 05:39:07 np0005545273 hungry_nightingale[249680]: 167 167
Dec  4 05:39:07 np0005545273 systemd[1]: libpod-6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c.scope: Deactivated successfully.
Dec  4 05:39:07 np0005545273 podman[249664]: 2025-12-04 10:39:07.825130643 +0000 UTC m=+0.166448446 container died 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:39:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-896993f541bc9533cf7d720d451ab6f6ba7fddfaf643b9b5e7caae6e7ef737ee-merged.mount: Deactivated successfully.
Dec  4 05:39:07 np0005545273 podman[249664]: 2025-12-04 10:39:07.875748768 +0000 UTC m=+0.217066561 container remove 6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:39:07 np0005545273 systemd[1]: libpod-conmon-6268159a34e63af820504bab7c827f4631369dfb65e6c635b3729f3a3e63c70c.scope: Deactivated successfully.
Dec  4 05:39:08 np0005545273 podman[249704]: 2025-12-04 10:39:08.059310543 +0000 UTC m=+0.044345662 container create abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:39:08 np0005545273 systemd[1]: Started libpod-conmon-abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866.scope.
Dec  4 05:39:08 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:39:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:08 np0005545273 podman[249704]: 2025-12-04 10:39:08.041036603 +0000 UTC m=+0.026071732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:39:08 np0005545273 podman[249704]: 2025-12-04 10:39:08.146396945 +0000 UTC m=+0.131432084 container init abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:39:08 np0005545273 podman[249704]: 2025-12-04 10:39:08.15430986 +0000 UTC m=+0.139344979 container start abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:39:08 np0005545273 podman[249704]: 2025-12-04 10:39:08.158547404 +0000 UTC m=+0.143582543 container attach abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:39:08 np0005545273 festive_banzai[249721]: {
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:    "0": [
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:        {
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "devices": [
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "/dev/loop3"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            ],
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_name": "ceph_lv0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_size": "21470642176",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "name": "ceph_lv0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "tags": {
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cluster_name": "ceph",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.crush_device_class": "",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.encrypted": "0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.objectstore": "bluestore",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osd_id": "0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.type": "block",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.vdo": "0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.with_tpm": "0"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            },
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "type": "block",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "vg_name": "ceph_vg0"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:        }
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:    ],
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:    "1": [
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:        {
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "devices": [
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "/dev/loop4"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            ],
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_name": "ceph_lv1",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_size": "21470642176",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "name": "ceph_lv1",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "tags": {
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cluster_name": "ceph",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.crush_device_class": "",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.encrypted": "0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.objectstore": "bluestore",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osd_id": "1",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.type": "block",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.vdo": "0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.with_tpm": "0"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            },
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "type": "block",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "vg_name": "ceph_vg1"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:        }
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:    ],
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:    "2": [
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:        {
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "devices": [
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "/dev/loop5"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            ],
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_name": "ceph_lv2",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_size": "21470642176",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "name": "ceph_lv2",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "tags": {
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.cluster_name": "ceph",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.crush_device_class": "",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.encrypted": "0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.objectstore": "bluestore",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osd_id": "2",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.type": "block",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.vdo": "0",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:                "ceph.with_tpm": "0"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            },
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "type": "block",
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:            "vg_name": "ceph_vg2"
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:        }
Dec  4 05:39:08 np0005545273 festive_banzai[249721]:    ]
Dec  4 05:39:08 np0005545273 festive_banzai[249721]: }
Dec  4 05:39:08 np0005545273 systemd[1]: libpod-abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866.scope: Deactivated successfully.
Dec  4 05:39:08 np0005545273 podman[249704]: 2025-12-04 10:39:08.496146958 +0000 UTC m=+0.481182087 container died abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:39:08 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1e2cfc2c721a23a7ff683840609ba196ce8a996d1354ad2051e2cb98ad2a4596-merged.mount: Deactivated successfully.
Dec  4 05:39:08 np0005545273 podman[249704]: 2025-12-04 10:39:08.558421389 +0000 UTC m=+0.543456528 container remove abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banzai, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:39:08 np0005545273 systemd[1]: libpod-conmon-abc54992f75b36ca377debd1b00d30d16d8f5616f6ae9992fc5032d59d5ae866.scope: Deactivated successfully.
Dec  4 05:39:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:09 np0005545273 podman[249804]: 2025-12-04 10:39:09.097154561 +0000 UTC m=+0.043435900 container create d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:39:09 np0005545273 systemd[1]: Started libpod-conmon-d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4.scope.
Dec  4 05:39:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:39:09 np0005545273 podman[249804]: 2025-12-04 10:39:09.078861981 +0000 UTC m=+0.025143330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:39:09 np0005545273 podman[249804]: 2025-12-04 10:39:09.175662022 +0000 UTC m=+0.121943381 container init d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:39:09 np0005545273 podman[249804]: 2025-12-04 10:39:09.186842037 +0000 UTC m=+0.133123366 container start d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:39:09 np0005545273 podman[249804]: 2025-12-04 10:39:09.191001059 +0000 UTC m=+0.137282408 container attach d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 05:39:09 np0005545273 wonderful_bhabha[249821]: 167 167
Dec  4 05:39:09 np0005545273 systemd[1]: libpod-d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4.scope: Deactivated successfully.
Dec  4 05:39:09 np0005545273 podman[249804]: 2025-12-04 10:39:09.193818488 +0000 UTC m=+0.140099817 container died d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:39:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0be92daa41baaf46fbce1f3fd48b57712c6090a91893f590f8b0b60d8f30ae5d-merged.mount: Deactivated successfully.
Dec  4 05:39:09 np0005545273 podman[249804]: 2025-12-04 10:39:09.230783038 +0000 UTC m=+0.177064367 container remove d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:39:09 np0005545273 systemd[1]: libpod-conmon-d85976d595b47aec976f0906b1298768a81a60b28c19c896409b7619e19da0a4.scope: Deactivated successfully.
Dec  4 05:39:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:39:09 np0005545273 podman[249844]: 2025-12-04 10:39:09.412131458 +0000 UTC m=+0.049850097 container create bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:39:09 np0005545273 systemd[1]: Started libpod-conmon-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope.
Dec  4 05:39:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:39:09 np0005545273 podman[249844]: 2025-12-04 10:39:09.391703856 +0000 UTC m=+0.029422545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:39:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:39:09 np0005545273 podman[249844]: 2025-12-04 10:39:09.555708911 +0000 UTC m=+0.193427570 container init bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:39:09 np0005545273 podman[249844]: 2025-12-04 10:39:09.562372634 +0000 UTC m=+0.200091273 container start bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:39:09 np0005545273 podman[249844]: 2025-12-04 10:39:09.566370212 +0000 UTC m=+0.204088881 container attach bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:39:10 np0005545273 lvm[249939]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:39:10 np0005545273 lvm[249942]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:39:10 np0005545273 lvm[249942]: VG ceph_vg2 finished
Dec  4 05:39:10 np0005545273 lvm[249939]: VG ceph_vg0 finished
Dec  4 05:39:10 np0005545273 lvm[249940]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:39:10 np0005545273 lvm[249940]: VG ceph_vg1 finished
Dec  4 05:39:10 np0005545273 mystifying_curran[249860]: {}
Dec  4 05:39:10 np0005545273 systemd[1]: libpod-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope: Deactivated successfully.
Dec  4 05:39:10 np0005545273 podman[249844]: 2025-12-04 10:39:10.48792182 +0000 UTC m=+1.125640459 container died bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:39:10 np0005545273 systemd[1]: libpod-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope: Consumed 1.533s CPU time.
Dec  4 05:39:10 np0005545273 systemd[1]: var-lib-containers-storage-overlay-a260db393ef73de73ab734ecca9df7b21255a9aba35361d9395d471c79d19993-merged.mount: Deactivated successfully.
Dec  4 05:39:10 np0005545273 podman[249844]: 2025-12-04 10:39:10.536682739 +0000 UTC m=+1.174401368 container remove bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_curran, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:39:10 np0005545273 systemd[1]: libpod-conmon-bb86a1f62db0864c00815ac245e8bb37b8eb1855f1eadfbf5396db058c9bd98b.scope: Deactivated successfully.
Dec  4 05:39:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:39:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:39:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:39:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:39:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674658812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:39:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:39:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674658812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:39:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:39:11 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:39:11.977 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:39:11 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:39:11.980 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:12 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:12.591+0000 7f8423c95640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/dfcd5d86-8b04-4c9e-b7fc-a8b3dfe0eeb4'.
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "format": "json"}]: dispatch
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/ded1c69d-60a5-4683-b853-47a6a2331bac'.
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp'
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp' to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta'
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "format": "json"}]: dispatch
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:13 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:13 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.iwufnj(active, since 24m)
Dec  4 05:39:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/11e6b02f-a848-4901-a396-9e1375701b90'.
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/.meta.tmp'
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/.meta.tmp' to config b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013/.meta'
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "format": "json"}]: dispatch
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142", "format": "json"}]: dispatch
Dec  4 05:39:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 1 op/s
Dec  4 05:39:18 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:39:18.983 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:39:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s wr, 2 op/s
Dec  4 05:39:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "new_size": 2147483648, "format": "json"}]: dispatch
Dec  4 05:39:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "format": "json"}]: dispatch
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a0126db-6550-44ce-a3c1-aa8acaa2b013' of type subvolume
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.123+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a0126db-6550-44ce-a3c1-aa8acaa2b013' of type subvolume
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a0126db-6550-44ce-a3c1-aa8acaa2b013", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9a0126db-6550-44ce-a3c1-aa8acaa2b013'' moved to trashcan
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a0126db-6550-44ce-a3c1-aa8acaa2b013, vol_name:cephfs) < ""
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.142+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:22.259+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142_cd44680f-beaa-44fb-858d-84098d409d42", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142_cd44680f-beaa-44fb-858d-84098d409d42, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp'
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp' to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta'
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142_cd44680f-beaa-44fb-858d-84098d409d42, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "snap_name": "aa7c34cc-89fa-4f37-ac23-f8e6d4d78142", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp'
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta.tmp' to config b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630/.meta'
Dec  4 05:39:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa7c34cc-89fa-4f37-ac23-f8e6d4d78142, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec  4 05:39:23 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.iwufnj(active, since 24m)
Dec  4 05:39:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/f40d18bc-cd97-4fcd-8483-2659863f3efc'.
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/.meta.tmp'
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/.meta.tmp' to config b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4/.meta'
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "format": "json"}]: dispatch
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec  4 05:39:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec  4 05:39:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 3 op/s
Dec  4 05:39:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  4 05:39:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  4 05:39:25 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/80abc3c2-3b19-4345-a3c6-9ba9356fed24'.
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/.meta.tmp'
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/.meta.tmp' to config b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997/.meta'
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "format": "json"}]: dispatch
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "format": "json"}]: dispatch
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2e2f6bb-d3cb-4e49-9c72-447ac26e9630' of type subvolume
Dec  4 05:39:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:26.182+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e2e2f6bb-d3cb-4e49-9c72-447ac26e9630' of type subvolume
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e2e2f6bb-d3cb-4e49-9c72-447ac26e9630", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e2e2f6bb-d3cb-4e49-9c72-447ac26e9630'' moved to trashcan
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e2e2f6bb-d3cb-4e49-9c72-447ac26e9630, vol_name:cephfs) < ""
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:39:26
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control']
Dec  4 05:39:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:39:26 np0005545273 podman[250022]: 2025-12-04 10:39:26.985537725 +0000 UTC m=+0.080116872 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  4 05:39:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 5 op/s
Dec  4 05:39:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:39:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:39:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:39:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:39:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:39:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/59d121f4-da85-41d2-a460-c3e50ff205a8'.
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp'
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp' to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta'
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "format": "json"}]: dispatch
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/614ddf80-49a0-47e1-8a8b-70edadadb393'.
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/.meta.tmp'
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/.meta.tmp' to config b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483/.meta'
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "format": "json"}]: dispatch
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec  4 05:39:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 20 KiB/s wr, 6 op/s
Dec  4 05:39:29 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "new_size": 2147483648, "format": "json"}]: dispatch
Dec  4 05:39:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "format": "json"}]: dispatch
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6f3ec2c-ea96-4c61-9d0a-ba594fe98997' of type subvolume
Dec  4 05:39:30 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:30.617+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6f3ec2c-ea96-4c61-9d0a-ba594fe98997' of type subvolume
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6f3ec2c-ea96-4c61-9d0a-ba594fe98997", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b6f3ec2c-ea96-4c61-9d0a-ba594fe98997'' moved to trashcan
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6f3ec2c-ea96-4c61-9d0a-ba594fe98997, vol_name:cephfs) < ""
Dec  4 05:39:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 30 KiB/s wr, 8 op/s
Dec  4 05:39:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949", "format": "json"}]: dispatch
Dec  4 05:39:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 30 KiB/s wr, 9 op/s
Dec  4 05:39:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  4 05:39:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "format": "json"}]: dispatch
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eafbbd68-3ab6-43b4-96ac-e00e60922483' of type subvolume
Dec  4 05:39:34 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:34.053+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eafbbd68-3ab6-43b4-96ac-e00e60922483' of type subvolume
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eafbbd68-3ab6-43b4-96ac-e00e60922483", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eafbbd68-3ab6-43b4-96ac-e00e60922483'' moved to trashcan
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eafbbd68-3ab6-43b4-96ac-e00e60922483, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/c5dc9c35-ebe8-41e2-8bb1-a722819e148b'.
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/.meta.tmp'
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/.meta.tmp' to config b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a/.meta'
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "format": "json"}]: dispatch
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec  4 05:39:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 32 KiB/s wr, 10 op/s
Dec  4 05:39:36 np0005545273 podman[250044]: 2025-12-04 10:39:36.955927494 +0000 UTC m=+0.058210932 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  4 05:39:37 np0005545273 podman[250043]: 2025-12-04 10:39:37.007194415 +0000 UTC m=+0.107600208 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669683444621841 of space, bias 1.0, pg target 0.20009050333865525 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.7942696887005924e-06 of space, bias 4.0, pg target 0.008153123626440712 quantized to 16 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:39:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 26 KiB/s wr, 8 op/s
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "format": "json"}]: dispatch
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dfe7d61-dd18-4df1-ba8a-2c28cc36210a' of type subvolume
Dec  4 05:39:38 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:38.194+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2dfe7d61-dd18-4df1-ba8a-2c28cc36210a' of type subvolume
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2dfe7d61-dd18-4df1-ba8a-2c28cc36210a", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2dfe7d61-dd18-4df1-ba8a-2c28cc36210a'' moved to trashcan
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2dfe7d61-dd18-4df1-ba8a-2c28cc36210a, vol_name:cephfs) < ""
Dec  4 05:39:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 26 KiB/s wr, 8 op/s
Dec  4 05:39:39 np0005545273 nova_compute[244644]: 2025-12-04 10:39:39.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:39 np0005545273 nova_compute[244644]: 2025-12-04 10:39:39.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:39:39 np0005545273 nova_compute[244644]: 2025-12-04 10:39:39.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:39:39 np0005545273 nova_compute[244644]: 2025-12-04 10:39:39.359 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.357 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.358 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.389 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.389 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.389 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:39:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:39:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455058553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:39:40 np0005545273 nova_compute[244644]: 2025-12-04 10:39:40.961 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.125 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.126 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.126 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.127 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.196 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.197 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.213 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/9a8f2f78-2832-4ae7-987b-f210b3ecae09'.
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/.meta.tmp'
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/.meta.tmp' to config b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6/.meta'
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "format": "json"}]: dispatch
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec  4 05:39:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec  4 05:39:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:39:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4101248298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.812 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.817 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.833 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.835 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:39:41 np0005545273 nova_compute[244644]: 2025-12-04 10:39:41.835 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:39:42 np0005545273 nova_compute[244644]: 2025-12-04 10:39:42.816 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:42 np0005545273 nova_compute[244644]: 2025-12-04 10:39:42.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:42 np0005545273 nova_compute[244644]: 2025-12-04 10:39:42.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:42 np0005545273 nova_compute[244644]: 2025-12-04 10:39:42.817 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 17 KiB/s wr, 6 op/s
Dec  4 05:39:43 np0005545273 nova_compute[244644]: 2025-12-04 10:39:43.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:39:43 np0005545273 nova_compute[244644]: 2025-12-04 10:39:43.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:39:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 15 KiB/s wr, 5 op/s
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "format": "json"}]: dispatch
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '082afd37-c266-4ac4-8cb3-b2d98a4b42b6' of type subvolume
Dec  4 05:39:46 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:46.879+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '082afd37-c266-4ac4-8cb3-b2d98a4b42b6' of type subvolume
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "082afd37-c266-4ac4-8cb3-b2d98a4b42b6", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/082afd37-c266-4ac4-8cb3-b2d98a4b42b6'' moved to trashcan
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:082afd37-c266-4ac4-8cb3-b2d98a4b42b6, vol_name:cephfs) < ""
Dec  4 05:39:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 6 op/s
Dec  4 05:39:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 10 KiB/s wr, 5 op/s
Dec  4 05:39:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564", "format": "json"}]: dispatch
Dec  4 05:39:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 4 op/s
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.390611) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791390643, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 251, "total_data_size": 3651521, "memory_usage": 3725792, "flush_reason": "Manual Compaction"}
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949_38bff781-e943-4d39-a749-423f72e5abda", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949_38bff781-e943-4d39-a749-423f72e5abda, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791414625, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3573459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16384, "largest_seqno": 18513, "table_properties": {"data_size": 3563758, "index_size": 6131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20132, "raw_average_key_size": 20, "raw_value_size": 3543996, "raw_average_value_size": 3565, "num_data_blocks": 276, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844580, "oldest_key_time": 1764844580, "file_creation_time": 1764844791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 24383 microseconds, and 9387 cpu microseconds.
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.414969) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3573459 bytes OK
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.415067) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.417323) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.417402) EVENT_LOG_v1 {"time_micros": 1764844791417390, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.417456) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3642468, prev total WAL file size 3642468, number of live WAL files 2.
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.419083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3489KB)], [38(7735KB)]
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791419228, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11494113, "oldest_snapshot_seqno": -1}
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp'
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp' to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta'
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949_38bff781-e943-4d39-a749-423f72e5abda, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "snap_name": "44d67cb4-039f-4bcf-973c-10ef9d2a3949", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp'
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta.tmp' to config b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14/.meta'
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4529 keys, 9700592 bytes, temperature: kUnknown
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791493606, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9700592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9666817, "index_size": 21377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 109679, "raw_average_key_size": 24, "raw_value_size": 9581511, "raw_average_value_size": 2115, "num_data_blocks": 907, "num_entries": 4529, "num_filter_entries": 4529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.494165) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9700592 bytes
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.496183) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.0 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5050, records dropped: 521 output_compression: NoCompression
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.496217) EVENT_LOG_v1 {"time_micros": 1764844791496199, "job": 18, "event": "compaction_finished", "compaction_time_micros": 74628, "compaction_time_cpu_micros": 22394, "output_level": 6, "num_output_files": 1, "total_output_size": 9700592, "num_input_records": 5050, "num_output_records": 4529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791497963, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844791501300, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.418967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:39:51 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:39:51.501498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:44d67cb4-039f-4bcf-973c-10ef9d2a3949, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564_3681d897-123a-4773-ade2-a9eef0b417b5", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564_3681d897-123a-4773-ade2-a9eef0b417b5, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564_3681d897-123a-4773-ade2-a9eef0b417b5, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "706dbf68-b212-4a2b-9b03-317bdcefb564", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:39:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:706dbf68-b212-4a2b-9b03-317bdcefb564, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:39:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 9.2 KiB/s wr, 4 op/s
Dec  4 05:39:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/69a6afba-83cb-47e6-956d-f0583049d7f7'.
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/.meta.tmp'
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/.meta.tmp' to config b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929/.meta'
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "format": "json"}]: dispatch
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec  4 05:39:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec  4 05:39:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:39:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:39:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:39:54.905 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:39:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:39:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "format": "json"}]: dispatch
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c01d539d-f169-44cc-bc00-f705cd397a14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c01d539d-f169-44cc-bc00-f705cd397a14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c01d539d-f169-44cc-bc00-f705cd397a14' of type subvolume
Dec  4 05:39:55 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:55.022+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c01d539d-f169-44cc-bc00-f705cd397a14' of type subvolume
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c01d539d-f169-44cc-bc00-f705cd397a14", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c01d539d-f169-44cc-bc00-f705cd397a14'' moved to trashcan
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c01d539d-f169-44cc-bc00-f705cd397a14, vol_name:cephfs) < ""
Dec  4 05:39:55 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  4 05:39:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 8.7 KiB/s wr, 3 op/s
Dec  4 05:39:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  4 05:39:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  4 05:39:56 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  4 05:39:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 5 op/s
Dec  4 05:39:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:39:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:39:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:39:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:39:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:39:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:39:57 np0005545273 podman[250140]: 2025-12-04 10:39:57.97787551 +0000 UTC m=+0.087609545 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/9fb477bd-f6d6-4a93-81fa-6fa31c946d8f'.
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/.meta.tmp'
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/.meta.tmp' to config b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7/.meta'
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "format": "json"}]: dispatch
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec  4 05:39:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:39:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "format": "json"}]: dispatch
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b67a5b53-5bfd-4560-8728-c671b5b695c4' of type subvolume
Dec  4 05:39:58 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:39:58.734+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b67a5b53-5bfd-4560-8728-c671b5b695c4' of type subvolume
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b67a5b53-5bfd-4560-8728-c671b5b695c4", "force": true, "format": "json"}]: dispatch
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b67a5b53-5bfd-4560-8728-c671b5b695c4'' moved to trashcan
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:39:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b67a5b53-5bfd-4560-8728-c671b5b695c4, vol_name:cephfs) < ""
Dec  4 05:39:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:39:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 27 KiB/s wr, 7 op/s
Dec  4 05:40:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9", "format": "json"}]: dispatch
Dec  4 05:40:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 35 KiB/s wr, 8 op/s
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "format": "json"}]: dispatch
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c6122866-729b-4644-a1c7-d8745b4ab929, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c6122866-729b-4644-a1c7-d8745b4ab929, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c6122866-729b-4644-a1c7-d8745b4ab929' of type subvolume
Dec  4 05:40:02 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:02.359+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c6122866-729b-4644-a1c7-d8745b4ab929' of type subvolume
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c6122866-729b-4644-a1c7-d8745b4ab929", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c6122866-729b-4644-a1c7-d8745b4ab929'' moved to trashcan
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c6122866-729b-4644-a1c7-d8745b4ab929, vol_name:cephfs) < ""
Dec  4 05:40:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 34 KiB/s wr, 8 op/s
Dec  4 05:40:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:40:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  4 05:40:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  4 05:40:04 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  4 05:40:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb", "format": "json"}]: dispatch
Dec  4 05:40:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 920 B/s rd, 17 KiB/s wr, 6 op/s
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "format": "json"}]: dispatch
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd99e196-9855-42c0-b3ab-7d9a58ace6f7' of type subvolume
Dec  4 05:40:06 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:06.059+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd99e196-9855-42c0-b3ab-7d9a58ace6f7' of type subvolume
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd99e196-9855-42c0-b3ab-7d9a58ace6f7", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bd99e196-9855-42c0-b3ab-7d9a58ace6f7'' moved to trashcan
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd99e196-9855-42c0-b3ab-7d9a58ace6f7, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/2e820380-b271-4f3e-8b24-f787b9d60a68'.
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/.meta.tmp'
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/.meta.tmp' to config b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e/.meta'
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "format": "json"}]: dispatch
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec  4 05:40:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:40:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:40:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 24 KiB/s wr, 7 op/s
Dec  4 05:40:07 np0005545273 podman[250166]: 2025-12-04 10:40:07.949396867 +0000 UTC m=+0.050523235 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:40:07 np0005545273 podman[250165]: 2025-12-04 10:40:07.970607458 +0000 UTC m=+0.082857709 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  4 05:40:09 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96", "format": "json"}]: dispatch
Dec  4 05:40:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.054648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809054701, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 251, "total_data_size": 306517, "memory_usage": 315696, "flush_reason": "Manual Compaction"}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809058816, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 276638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18514, "largest_seqno": 18960, "table_properties": {"data_size": 274024, "index_size": 650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6967, "raw_average_key_size": 20, "raw_value_size": 268642, "raw_average_value_size": 776, "num_data_blocks": 29, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844792, "oldest_key_time": 1764844792, "file_creation_time": 1764844809, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4190 microseconds, and 1667 cpu microseconds.
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.058847) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 276638 bytes OK
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.058861) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.061855) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.061868) EVENT_LOG_v1 {"time_micros": 1764844809061864, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.061884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 303737, prev total WAL file size 303737, number of live WAL files 2.
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.062225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(270KB)], [41(9473KB)]
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809062244, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9977230, "oldest_snapshot_seqno": -1}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4363 keys, 6670725 bytes, temperature: kUnknown
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809101019, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6670725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6642365, "index_size": 16347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 106773, "raw_average_key_size": 24, "raw_value_size": 6564249, "raw_average_value_size": 1504, "num_data_blocks": 687, "num_entries": 4363, "num_filter_entries": 4363, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844809, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.101348) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6670725 bytes
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.103007) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 256.3 rd, 171.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.3 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(60.2) write-amplify(24.1) OK, records in: 4875, records dropped: 512 output_compression: NoCompression
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.103024) EVENT_LOG_v1 {"time_micros": 1764844809103015, "job": 20, "event": "compaction_finished", "compaction_time_micros": 38930, "compaction_time_cpu_micros": 17444, "output_level": 6, "num_output_files": 1, "total_output_size": 6670725, "num_input_records": 4875, "num_output_records": 4363, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809103204, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844809104896, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.062181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:09 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:09.104981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 24 KiB/s wr, 6 op/s
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "format": "json"}]: dispatch
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b863b6ff-799e-4ddb-80e5-dee26b0df34e' of type subvolume
Dec  4 05:40:10 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:10.917+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b863b6ff-799e-4ddb-80e5-dee26b0df34e' of type subvolume
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b863b6ff-799e-4ddb-80e5-dee26b0df34e", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b863b6ff-799e-4ddb-80e5-dee26b0df34e'' moved to trashcan
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b863b6ff-799e-4ddb-80e5-dee26b0df34e, vol_name:cephfs) < ""
Dec  4 05:40:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 29 KiB/s wr, 5 op/s
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4122217808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4122217808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:40:11 np0005545273 podman[250354]: 2025-12-04 10:40:11.973622353 +0000 UTC m=+0.118964478 container create 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:40:11 np0005545273 podman[250354]: 2025-12-04 10:40:11.88612333 +0000 UTC m=+0.031465555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:40:12 np0005545273 systemd[1]: Started libpod-conmon-319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb.scope.
Dec  4 05:40:12 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:40:12 np0005545273 podman[250354]: 2025-12-04 10:40:12.09753562 +0000 UTC m=+0.242877775 container init 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:40:12 np0005545273 podman[250354]: 2025-12-04 10:40:12.106254395 +0000 UTC m=+0.251596530 container start 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:40:12 np0005545273 podman[250354]: 2025-12-04 10:40:12.109639958 +0000 UTC m=+0.254982123 container attach 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:40:12 np0005545273 compassionate_rosalind[250370]: 167 167
Dec  4 05:40:12 np0005545273 systemd[1]: libpod-319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb.scope: Deactivated successfully.
Dec  4 05:40:12 np0005545273 podman[250354]: 2025-12-04 10:40:12.11379171 +0000 UTC m=+0.259133855 container died 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:40:12 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ae79790d6cb72c79b32d067011bdbe737915b230bb6659a8ba0ef9efcab60696-merged.mount: Deactivated successfully.
Dec  4 05:40:12 np0005545273 podman[250354]: 2025-12-04 10:40:12.162514268 +0000 UTC m=+0.307856403 container remove 319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:40:12 np0005545273 systemd[1]: libpod-conmon-319c4d8d36dda1686fee00fb0b3c0e1256f1013a30b36e52552af447335476cb.scope: Deactivated successfully.
Dec  4 05:40:12 np0005545273 podman[250394]: 2025-12-04 10:40:12.327421715 +0000 UTC m=+0.044656970 container create 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:40:12 np0005545273 systemd[1]: Started libpod-conmon-844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704.scope.
Dec  4 05:40:12 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:40:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:12 np0005545273 podman[250394]: 2025-12-04 10:40:12.308566082 +0000 UTC m=+0.025801337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:40:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:12 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:12 np0005545273 podman[250394]: 2025-12-04 10:40:12.413961844 +0000 UTC m=+0.131197089 container init 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec  4 05:40:12 np0005545273 podman[250394]: 2025-12-04 10:40:12.424370739 +0000 UTC m=+0.141605974 container start 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:40:12 np0005545273 podman[250394]: 2025-12-04 10:40:12.428173983 +0000 UTC m=+0.145409218 container attach 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:40:12 np0005545273 mystifying_dijkstra[250411]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:40:12 np0005545273 mystifying_dijkstra[250411]: --> All data devices are unavailable
Dec  4 05:40:12 np0005545273 systemd[1]: libpod-844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704.scope: Deactivated successfully.
Dec  4 05:40:12 np0005545273 podman[250394]: 2025-12-04 10:40:12.903050834 +0000 UTC m=+0.620286099 container died 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:40:12 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c7b2b4dd31d745bcd8f02db6b357a9f985c8d0b78a6466bc1f7cc4d805013ddf-merged.mount: Deactivated successfully.
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/185c6afc-9ae6-4332-b81a-975debb7627f'.
Dec  4 05:40:12 np0005545273 podman[250394]: 2025-12-04 10:40:12.965300556 +0000 UTC m=+0.682535791 container remove 844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "format": "json"}]: dispatch
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:40:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:40:12 np0005545273 systemd[1]: libpod-conmon-844823ae409baa845ab6c12e30fcff74dc1c29f28f518a69a20f00529cd49704.scope: Deactivated successfully.
Dec  4 05:40:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273", "format": "json"}]: dispatch
Dec  4 05:40:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 29 KiB/s wr, 5 op/s
Dec  4 05:40:13 np0005545273 podman[250506]: 2025-12-04 10:40:13.460792242 +0000 UTC m=+0.040962538 container create 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:40:13 np0005545273 systemd[1]: Started libpod-conmon-481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb.scope.
Dec  4 05:40:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:40:13 np0005545273 podman[250506]: 2025-12-04 10:40:13.444086552 +0000 UTC m=+0.024256868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:40:13 np0005545273 podman[250506]: 2025-12-04 10:40:13.542971194 +0000 UTC m=+0.123141570 container init 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:40:13 np0005545273 podman[250506]: 2025-12-04 10:40:13.552082599 +0000 UTC m=+0.132252895 container start 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:40:13 np0005545273 podman[250506]: 2025-12-04 10:40:13.555768689 +0000 UTC m=+0.135939005 container attach 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:40:13 np0005545273 eloquent_bouman[250523]: 167 167
Dec  4 05:40:13 np0005545273 systemd[1]: libpod-481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb.scope: Deactivated successfully.
Dec  4 05:40:13 np0005545273 podman[250506]: 2025-12-04 10:40:13.560068214 +0000 UTC m=+0.140238520 container died 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:40:13 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e35f82feb258702b10241eee00bf7eabafef7b975c4149d1534db3997865f0f4-merged.mount: Deactivated successfully.
Dec  4 05:40:13 np0005545273 podman[250506]: 2025-12-04 10:40:13.596369647 +0000 UTC m=+0.176539933 container remove 481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:40:13 np0005545273 systemd[1]: libpod-conmon-481755a314971bb3ae0d236ada0b0559a48e2714ad88fdc18f38663165f8e7cb.scope: Deactivated successfully.
Dec  4 05:40:13 np0005545273 podman[250547]: 2025-12-04 10:40:13.788197686 +0000 UTC m=+0.049847257 container create 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec  4 05:40:13 np0005545273 systemd[1]: Started libpod-conmon-84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a.scope.
Dec  4 05:40:13 np0005545273 podman[250547]: 2025-12-04 10:40:13.761255533 +0000 UTC m=+0.022905134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:40:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:40:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:14 np0005545273 podman[250547]: 2025-12-04 10:40:14.24979825 +0000 UTC m=+0.511447861 container init 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:40:14 np0005545273 podman[250547]: 2025-12-04 10:40:14.257281584 +0000 UTC m=+0.518931155 container start 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:40:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:40:14 np0005545273 podman[250547]: 2025-12-04 10:40:14.271947905 +0000 UTC m=+0.533597506 container attach 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:40:14 np0005545273 angry_franklin[250564]: {
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:    "0": [
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:        {
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "devices": [
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "/dev/loop3"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            ],
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_name": "ceph_lv0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_size": "21470642176",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "name": "ceph_lv0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "tags": {
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cluster_name": "ceph",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.crush_device_class": "",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.encrypted": "0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.objectstore": "bluestore",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osd_id": "0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.type": "block",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.vdo": "0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.with_tpm": "0"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            },
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "type": "block",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "vg_name": "ceph_vg0"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:        }
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:    ],
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:    "1": [
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:        {
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "devices": [
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "/dev/loop4"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            ],
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_name": "ceph_lv1",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_size": "21470642176",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "name": "ceph_lv1",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "tags": {
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cluster_name": "ceph",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.crush_device_class": "",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.encrypted": "0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.objectstore": "bluestore",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osd_id": "1",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.type": "block",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.vdo": "0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.with_tpm": "0"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            },
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "type": "block",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "vg_name": "ceph_vg1"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:        }
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:    ],
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:    "2": [
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:        {
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "devices": [
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "/dev/loop5"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            ],
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_name": "ceph_lv2",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_size": "21470642176",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "name": "ceph_lv2",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "tags": {
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.cluster_name": "ceph",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.crush_device_class": "",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.encrypted": "0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.objectstore": "bluestore",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osd_id": "2",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.type": "block",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.vdo": "0",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:                "ceph.with_tpm": "0"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            },
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "type": "block",
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:            "vg_name": "ceph_vg2"
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:        }
Dec  4 05:40:14 np0005545273 angry_franklin[250564]:    ]
Dec  4 05:40:14 np0005545273 angry_franklin[250564]: }
Dec  4 05:40:14 np0005545273 systemd[1]: libpod-84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a.scope: Deactivated successfully.
Dec  4 05:40:14 np0005545273 podman[250547]: 2025-12-04 10:40:14.543552786 +0000 UTC m=+0.805202347 container died 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:40:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5e2719085009e4b797a21ecbf387f11a2e6dac50ec47f1d7ddceb4d00d5afc69-merged.mount: Deactivated successfully.
Dec  4 05:40:14 np0005545273 podman[250547]: 2025-12-04 10:40:14.586994025 +0000 UTC m=+0.848643586 container remove 84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:40:14 np0005545273 systemd[1]: libpod-conmon-84f1257dffc9012cd0a64ba8bbaf1dfda1f0f2c5741538c947302cbe122f457a.scope: Deactivated successfully.
Dec  4 05:40:15 np0005545273 podman[250648]: 2025-12-04 10:40:15.068430907 +0000 UTC m=+0.045263145 container create 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:40:15 np0005545273 systemd[1]: Started libpod-conmon-5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f.scope.
Dec  4 05:40:15 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:40:15 np0005545273 podman[250648]: 2025-12-04 10:40:15.048929637 +0000 UTC m=+0.025761875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:40:15 np0005545273 podman[250648]: 2025-12-04 10:40:15.14662133 +0000 UTC m=+0.123453558 container init 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:40:15 np0005545273 podman[250648]: 2025-12-04 10:40:15.152650178 +0000 UTC m=+0.129482386 container start 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:40:15 np0005545273 podman[250648]: 2025-12-04 10:40:15.156044151 +0000 UTC m=+0.132876389 container attach 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:40:15 np0005545273 romantic_tharp[250663]: 167 167
Dec  4 05:40:15 np0005545273 systemd[1]: libpod-5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f.scope: Deactivated successfully.
Dec  4 05:40:15 np0005545273 podman[250648]: 2025-12-04 10:40:15.159200069 +0000 UTC m=+0.136032287 container died 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:40:15 np0005545273 systemd[1]: var-lib-containers-storage-overlay-63f735ef0aed61b70df94dff91d37327e2e019dacf321bd813dbf8c7dc6ec013-merged.mount: Deactivated successfully.
Dec  4 05:40:15 np0005545273 podman[250648]: 2025-12-04 10:40:15.196256901 +0000 UTC m=+0.173089119 container remove 5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_tharp, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:40:15 np0005545273 systemd[1]: libpod-conmon-5a0b910899c0117035b51cbc2773f83f766b5e8b669b05ae0041cab91afa5b1f.scope: Deactivated successfully.
Dec  4 05:40:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 42 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 25 KiB/s wr, 5 op/s
Dec  4 05:40:15 np0005545273 podman[250685]: 2025-12-04 10:40:15.346697901 +0000 UTC m=+0.039192475 container create 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:40:15 np0005545273 systemd[1]: Started libpod-conmon-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope.
Dec  4 05:40:15 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:40:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:40:15 np0005545273 podman[250685]: 2025-12-04 10:40:15.422822644 +0000 UTC m=+0.115317228 container init 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:40:15 np0005545273 podman[250685]: 2025-12-04 10:40:15.329000226 +0000 UTC m=+0.021494830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:40:15 np0005545273 podman[250685]: 2025-12-04 10:40:15.433193559 +0000 UTC m=+0.125688133 container start 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:40:15 np0005545273 podman[250685]: 2025-12-04 10:40:15.436971192 +0000 UTC m=+0.129465966 container attach 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:40:15 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:40:15.635 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:40:15 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:40:15.638 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:40:16 np0005545273 lvm[250777]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:40:16 np0005545273 lvm[250777]: VG ceph_vg0 finished
Dec  4 05:40:16 np0005545273 lvm[250780]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:40:16 np0005545273 lvm[250780]: VG ceph_vg1 finished
Dec  4 05:40:16 np0005545273 lvm[250782]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:40:16 np0005545273 lvm[250782]: VG ceph_vg2 finished
Dec  4 05:40:16 np0005545273 magical_blackwell[250701]: {}
Dec  4 05:40:16 np0005545273 systemd[1]: libpod-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope: Deactivated successfully.
Dec  4 05:40:16 np0005545273 systemd[1]: libpod-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope: Consumed 1.357s CPU time.
Dec  4 05:40:16 np0005545273 podman[250685]: 2025-12-04 10:40:16.315362768 +0000 UTC m=+1.007857332 container died 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:40:16 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0dcfa83a2a8628f92e2d17ffb0bb1a78179a1be9ffb98386ce67dd807e4127d2-merged.mount: Deactivated successfully.
Dec  4 05:40:16 np0005545273 podman[250685]: 2025-12-04 10:40:16.367019068 +0000 UTC m=+1.059513642 container remove 7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:40:16 np0005545273 systemd[1]: libpod-conmon-7f4d10eaf693da43d03e759f080dc4a4649339a076cc395b0f4f69cd2b2a504b.scope: Deactivated successfully.
Dec  4 05:40:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:40:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:40:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 5 op/s
Dec  4 05:40:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "format": "json"}]: dispatch
Dec  4 05:40:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0", "format": "json"}]: dispatch
Dec  4 05:40:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:18 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:40:18.639 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:40:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:40:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 25 KiB/s wr, 5 op/s
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/11ab6c18-79c1-476e-b2f1-acdd14fba99c'.
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "format": "json"}]: dispatch
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:40:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 34 KiB/s wr, 5 op/s
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "target_sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, target_sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/3d262762-681b-471d-848e-05e9faf04c07'.
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] tracking-id f1f06a15-8b3b-472e-a302-1e70e5eecda7 for path b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, target_sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.478+0000 7f84294a0640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 7cfdcab3-2a40-4b85-9afc-15385e3510f9)
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:21.497+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 7cfdcab3-2a40-4b85-9afc-15385e3510f9) -- by 0 seconds
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0_c4d58189-d550-43b2-accd-301d015ec2f8", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0_c4d58189-d550-43b2-accd-301d015ec2f8, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:22.485+0000 7f840518a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.snap/9dac2a63-84c3-4448-8251-c9b0776fc4fe/185c6afc-9ae6-4332-b81a-975debb7627f' to b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/3d262762-681b-471d-848e-05e9faf04c07'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0_c4d58189-d550-43b2-accd-301d015ec2f8, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "7c725858-4362-45de-9321-14ab6b5f8ef0", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7c725858-4362-45de-9321-14ab6b5f8ef0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] untracking f1f06a15-8b3b-472e-a302-1e70e5eecda7
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta.tmp' to config b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9/.meta'
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 7cfdcab3-2a40-4b85-9afc-15385e3510f9)
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec  4 05:40:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:40:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 19 KiB/s wr, 5 op/s
Dec  4 05:40:23 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.iwufnj(active, since 25m)
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167", "format": "json"}]: dispatch
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Dec  4 05:40:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f8435ce5760>
Dec  4 05:40:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:40:24 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 17 completed events
Dec  4 05:40:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:40:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 43 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 3 op/s
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  4 05:40:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  4 05:40:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273_83da3e77-2028-498b-a84c-b65dbead073b", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273_83da3e77-2028-498b-a84c-b65dbead073b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:25 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273_83da3e77-2028-498b-a84c-b65dbead073b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "994f41bf-ed68-4080-9c5f-d4c5df7f4273", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:994f41bf-ed68-4080-9c5f-d4c5df7f4273, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:40:26
Dec  4 05:40:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:40:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:40:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms']
Dec  4 05:40:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:40:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 55 KiB/s wr, 8 op/s
Dec  4 05:40:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:40:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:40:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:40:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:40:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:40:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c", "format": "json"}]: dispatch
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "format": "json"}]: dispatch
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7cfdcab3-2a40-4b85-9afc-15385e3510f9", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7cfdcab3-2a40-4b85-9afc-15385e3510f9'' moved to trashcan
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cfdcab3-2a40-4b85-9afc-15385e3510f9, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96_ac09cea8-b1d8-4db8-91c9-4bd4ac8f268a", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96_ac09cea8-b1d8-4db8-91c9-4bd4ac8f268a, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96_ac09cea8-b1d8-4db8-91c9-4bd4ac8f268a, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "2080cf6d-717b-4750-a2b8-d93db758ab96", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2080cf6d-717b-4750-a2b8-d93db758ab96, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:28 np0005545273 podman[250859]: 2025-12-04 10:40:28.976177191 +0000 UTC m=+0.068700020 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  4 05:40:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:40:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 56 KiB/s wr, 10 op/s
Dec  4 05:40:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  4 05:40:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  4 05:40:30 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 82 KiB/s wr, 12 op/s
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe_f0ef63cc-82f0-4e30-af39-c9b2aa8ae4cb", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe_f0ef63cc-82f0-4e30-af39-c9b2aa8ae4cb, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe_f0ef63cc-82f0-4e30-af39-c9b2aa8ae4cb, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "snap_name": "9dac2a63-84c3-4448-8251-c9b0776fc4fe", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp'
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta.tmp' to config b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c/.meta'
Dec  4 05:40:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dac2a63-84c3-4448-8251-c9b0776fc4fe, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  4 05:40:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  4 05:40:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb_023fae40-59a0-48fc-a8f7-4e2554504fc0", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb_023fae40-59a0-48fc-a8f7-4e2554504fc0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb_023fae40-59a0-48fc-a8f7-4e2554504fc0, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "76e0aa1d-e6e6-4ec3-a58c-79587b9868cb", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76e0aa1d-e6e6-4ec3-a58c-79587b9868cb, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c_528a732c-0e32-4483-b208-a88167d57126", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c_528a732c-0e32-4483-b208-a88167d57126, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c_528a732c-0e32-4483-b208-a88167d57126, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec  4 05:40:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bf6cd08b-18da-4eb1-b598-27dbb9cb5f7c, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 88 KiB/s wr, 17 op/s
Dec  4 05:40:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  4 05:40:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  4 05:40:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  4 05:40:34 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "980cc482-537a-4856-a203-512899e0bf5c", "format": "json"}]: dispatch
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:980cc482-537a-4856-a203-512899e0bf5c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:980cc482-537a-4856-a203-512899e0bf5c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '980cc482-537a-4856-a203-512899e0bf5c' of type subvolume
Dec  4 05:40:35 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:35.208+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '980cc482-537a-4856-a203-512899e0bf5c' of type subvolume
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "980cc482-537a-4856-a203-512899e0bf5c", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/980cc482-537a-4856-a203-512899e0bf5c'' moved to trashcan
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:980cc482-537a-4856-a203-512899e0bf5c, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  4 05:40:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  4 05:40:35 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 43 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 49 KiB/s wr, 11 op/s
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9_3df864a0-f947-4289-aa56-ed58a988606b", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9_3df864a0-f947-4289-aa56-ed58a988606b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9_3df864a0-f947-4289-aa56-ed58a988606b, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "snap_name": "82370328-067d-4dd3-9bef-3f2224bb43b9", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta.tmp' to config b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3/.meta'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82370328-067d-4dd3-9bef-3f2224bb43b9, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167_36b818cb-a4e7-4cde-b290-576e92a76d22", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167_36b818cb-a4e7-4cde-b290-576e92a76d22, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167_36b818cb-a4e7-4cde-b290-576e92a76d22, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "snap_name": "ed47c747-af46-4672-ae2b-cea707990167", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta.tmp' to config b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29/.meta'
Dec  4 05:40:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed47c747-af46-4672-ae2b-cea707990167, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  4 05:40:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  4 05:40:36 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670355214623647 of space, bias 1.0, pg target 0.2001106564387094 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 3.9761890973508873e-05 of space, bias 4.0, pg target 0.04771426916821065 quantized to 16 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:40:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  4 05:40:37 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  4 05:40:37 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  4 05:40:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 44 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s wr, 10 op/s
Dec  4 05:40:39 np0005545273 podman[250880]: 2025-12-04 10:40:39.012458506 +0000 UTC m=+0.077935969 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:40:39 np0005545273 podman[250879]: 2025-12-04 10:40:39.02117632 +0000 UTC m=+0.120475654 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/18a0c018-b882-484c-9916-531fa9d043b1'.
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/.meta.tmp'
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/.meta.tmp' to config b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4/.meta'
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "format": "json"}]: dispatch
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.286815) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839286852, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 696, "num_deletes": 256, "total_data_size": 800292, "memory_usage": 814552, "flush_reason": "Manual Compaction"}
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839293564, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 793076, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18961, "largest_seqno": 19656, "table_properties": {"data_size": 789199, "index_size": 1593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8982, "raw_average_key_size": 19, "raw_value_size": 781183, "raw_average_value_size": 1679, "num_data_blocks": 71, "num_entries": 465, "num_filter_entries": 465, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844810, "oldest_key_time": 1764844810, "file_creation_time": 1764844839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 6795 microseconds, and 3425 cpu microseconds.
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.293607) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 793076 bytes OK
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.293625) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295122) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295139) EVENT_LOG_v1 {"time_micros": 1764844839295135, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295159) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 796460, prev total WAL file size 796460, number of live WAL files 2.
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295648) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(774KB)], [44(6514KB)]
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839295747, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7463801, "oldest_snapshot_seqno": -1}
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "format": "json"}]: dispatch
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86910b9a-b822-4f70-bcbf-6e5bf72bae29' of type subvolume
Dec  4 05:40:39 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:39.342+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86910b9a-b822-4f70-bcbf-6e5bf72bae29' of type subvolume
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4297 keys, 7337193 bytes, temperature: kUnknown
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839346937, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7337193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7307969, "index_size": 17402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 106919, "raw_average_key_size": 24, "raw_value_size": 7229683, "raw_average_value_size": 1682, "num_data_blocks": 728, "num_entries": 4297, "num_filter_entries": 4297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1008 B/s rd, 128 KiB/s wr, 16 op/s
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86910b9a-b822-4f70-bcbf-6e5bf72bae29", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.347498) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7337193 bytes
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.350846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.1 rd, 142.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.4 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(18.7) write-amplify(9.3) OK, records in: 4828, records dropped: 531 output_compression: NoCompression
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.350890) EVENT_LOG_v1 {"time_micros": 1764844839350871, "job": 22, "event": "compaction_finished", "compaction_time_micros": 51456, "compaction_time_cpu_micros": 19940, "output_level": 6, "num_output_files": 1, "total_output_size": 7337193, "num_input_records": 4828, "num_output_records": 4297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839351746, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844839354721, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.295492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:39 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:40:39.354810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86910b9a-b822-4f70-bcbf-6e5bf72bae29'' moved to trashcan
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86910b9a-b822-4f70-bcbf-6e5bf72bae29, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "format": "json"}]: dispatch
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8589c6fa-15d7-4a25-a420-527b5f3ec7d3' of type subvolume
Dec  4 05:40:39 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:39.623+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8589c6fa-15d7-4a25-a420-527b5f3ec7d3' of type subvolume
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8589c6fa-15d7-4a25-a420-527b5f3ec7d3", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8589c6fa-15d7-4a25-a420-527b5f3ec7d3'' moved to trashcan
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8589c6fa-15d7-4a25-a420-527b5f3ec7d3, vol_name:cephfs) < ""
Dec  4 05:40:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  4 05:40:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  4 05:40:40 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:40:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 811 B/s rd, 59 KiB/s wr, 8 op/s
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.414 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.415 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.415 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.415 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.445 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.446 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.446 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.446 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:40:41 np0005545273 nova_compute[244644]: 2025-12-04 10:40:41.447 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:40:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:40:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738397721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.009 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.176 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.178 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5077MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.323 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.323 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.343 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:40:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:40:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187427323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.964 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:40:42 np0005545273 nova_compute[244644]: 2025-12-04 10:40:42.971 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:40:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 50 KiB/s wr, 11 op/s
Dec  4 05:40:43 np0005545273 nova_compute[244644]: 2025-12-04 10:40:43.794 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:40:43 np0005545273 nova_compute[244644]: 2025-12-04 10:40:43.796 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:40:43 np0005545273 nova_compute[244644]: 2025-12-04 10:40:43.796 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:40:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:40:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  4 05:40:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  4 05:40:44 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  4 05:40:44 np0005545273 nova_compute[244644]: 2025-12-04 10:40:44.792 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:44 np0005545273 nova_compute[244644]: 2025-12-04 10:40:44.792 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:44 np0005545273 nova_compute[244644]: 2025-12-04 10:40:44.792 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:44 np0005545273 nova_compute[244644]: 2025-12-04 10:40:44.793 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:45 np0005545273 nova_compute[244644]: 2025-12-04 10:40:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:40:45 np0005545273 nova_compute[244644]: 2025-12-04 10:40:45.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 843 B/s rd, 48 KiB/s wr, 6 op/s
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "format": "json"}]: dispatch
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:45 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:45.769+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '369e894d-504a-4bdd-99b2-2e34e29db9b4' of type subvolume
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '369e894d-504a-4bdd-99b2-2e34e29db9b4' of type subvolume
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "369e894d-504a-4bdd-99b2-2e34e29db9b4", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/369e894d-504a-4bdd-99b2-2e34e29db9b4'' moved to trashcan
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:369e894d-504a-4bdd-99b2-2e34e29db9b4, vol_name:cephfs) < ""
Dec  4 05:40:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 55 KiB/s wr, 6 op/s
Dec  4 05:40:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:40:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 49 KiB/s wr, 6 op/s
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 18 KiB/s wr, 4 op/s
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/2101def7-9ea1-4e61-bdd7-4cd9a9dd7b54'.
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/.meta.tmp'
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/.meta.tmp' to config b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58/.meta'
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "format": "json"}]: dispatch
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec  4 05:40:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec  4 05:40:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:40:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:40:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 18 KiB/s wr, 3 op/s
Dec  4 05:40:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/869f5edd-b477-4fc8-89df-7313aed09736'.
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/.meta.tmp'
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/.meta.tmp' to config b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93/.meta'
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "format": "json"}]: dispatch
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec  4 05:40:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec  4 05:40:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:40:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:40:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:40:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:40:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:40:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:40:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:40:54.906 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:40:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 184 B/s rd, 16 KiB/s wr, 2 op/s
Dec  4 05:40:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec  4 05:40:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:40:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:40:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:40:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:40:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:40:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "format": "json"}]: dispatch
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:40:58 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:40:58.413+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4a5cb54-f925-4ec3-ad46-31a41be6ac58' of type subvolume
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4a5cb54-f925-4ec3-ad46-31a41be6ac58' of type subvolume
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4a5cb54-f925-4ec3-ad46-31a41be6ac58", "force": true, "format": "json"}]: dispatch
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d4a5cb54-f925-4ec3-ad46-31a41be6ac58'' moved to trashcan
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:40:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4a5cb54-f925-4ec3-ad46-31a41be6ac58, vol_name:cephfs) < ""
Dec  4 05:40:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:40:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 17 KiB/s wr, 3 op/s
Dec  4 05:40:59 np0005545273 podman[250970]: 2025-12-04 10:40:59.949230643 +0000 UTC m=+0.055305261 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:41:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 3 op/s
Dec  4 05:41:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 4 op/s
Dec  4 05:41:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 44 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 3 op/s
Dec  4 05:41:06 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:41:06.885 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:41:06 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:41:06.887 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:41:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 37 KiB/s wr, 4 op/s
Dec  4 05:41:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 23 KiB/s wr, 3 op/s
Dec  4 05:41:09 np0005545273 podman[250995]: 2025-12-04 10:41:09.954233273 +0000 UTC m=+0.055308714 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:41:09 np0005545273 podman[250994]: 2025-12-04 10:41:09.981799847 +0000 UTC m=+0.087773269 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  4 05:41:10 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:41:10.889 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:41:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Dec  4 05:41:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:41:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1962865581' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:41:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:41:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1962865581' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/b30d8655-9d07-48f5-9b2a-5c00b9d7715b'.
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/.meta.tmp'
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/.meta.tmp' to config b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad/.meta'
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "format": "json"}]: dispatch
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec  4 05:41:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec  4 05:41:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.7 KiB/s wr, 1 op/s
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "format": "json"}]: dispatch
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8fa51969-52d7-4794-a864-cda7f0a42b93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8fa51969-52d7-4794-a864-cda7f0a42b93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8fa51969-52d7-4794-a864-cda7f0a42b93' of type subvolume
Dec  4 05:41:13 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:13.699+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8fa51969-52d7-4794-a864-cda7f0a42b93' of type subvolume
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8fa51969-52d7-4794-a864-cda7f0a42b93", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8fa51969-52d7-4794-a864-cda7f0a42b93'' moved to trashcan
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8fa51969-52d7-4794-a864-cda7f0a42b93, vol_name:cephfs) < ""
Dec  4 05:41:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/b0e8816f-0808-444b-8920-ec78ecd56640'.
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/.meta.tmp'
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/.meta.tmp' to config b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502/.meta'
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "format": "json"}]: dispatch
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec  4 05:41:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec  4 05:41:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:41:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 1 op/s
Dec  4 05:41:17 np0005545273 podman[251183]: 2025-12-04 10:41:17.729674736 +0000 UTC m=+0.060501922 container create 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:41:17 np0005545273 podman[251183]: 2025-12-04 10:41:17.692703579 +0000 UTC m=+0.023530785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:41:17 np0005545273 systemd[1]: Started libpod-conmon-135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8.scope.
Dec  4 05:41:17 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:41:17 np0005545273 podman[251183]: 2025-12-04 10:41:17.978826678 +0000 UTC m=+0.309653944 container init 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:41:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:41:17 np0005545273 podman[251183]: 2025-12-04 10:41:17.990058996 +0000 UTC m=+0.320886222 container start 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  4 05:41:17 np0005545273 podman[251183]: 2025-12-04 10:41:17.994061566 +0000 UTC m=+0.324888832 container attach 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:41:18 np0005545273 zen_shannon[251199]: 167 167
Dec  4 05:41:18 np0005545273 systemd[1]: libpod-135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8.scope: Deactivated successfully.
Dec  4 05:41:18 np0005545273 podman[251183]: 2025-12-04 10:41:18.002798502 +0000 UTC m=+0.333625688 container died 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:41:18 np0005545273 systemd[1]: var-lib-containers-storage-overlay-16a01b6324d1fadece11fb4f237fe6f99e99759b1e183e52bd44461e1321e126-merged.mount: Deactivated successfully.
Dec  4 05:41:18 np0005545273 podman[251183]: 2025-12-04 10:41:18.327905209 +0000 UTC m=+0.658732405 container remove 135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:41:18 np0005545273 systemd[1]: libpod-conmon-135e9e41cc2f97bfa78842ab447638e4d5b64e7ee2bec97623f1639965ad84e8.scope: Deactivated successfully.
Dec  4 05:41:18 np0005545273 podman[251221]: 2025-12-04 10:41:18.479524792 +0000 UTC m=+0.026061608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:41:18 np0005545273 podman[251221]: 2025-12-04 10:41:18.61770773 +0000 UTC m=+0.164244536 container create 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:41:18 np0005545273 systemd[1]: Started libpod-conmon-472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b.scope.
Dec  4 05:41:18 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:41:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:19 np0005545273 podman[251221]: 2025-12-04 10:41:19.113515213 +0000 UTC m=+0.660052019 container init 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:41:19 np0005545273 podman[251221]: 2025-12-04 10:41:19.120352012 +0000 UTC m=+0.666888808 container start 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:41:19 np0005545273 podman[251221]: 2025-12-04 10:41:19.186026282 +0000 UTC m=+0.732563098 container attach 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:41:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 15 KiB/s wr, 2 op/s
Dec  4 05:41:19 np0005545273 magical_spence[251238]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:41:19 np0005545273 magical_spence[251238]: --> All data devices are unavailable
Dec  4 05:41:19 np0005545273 systemd[1]: libpod-472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b.scope: Deactivated successfully.
Dec  4 05:41:19 np0005545273 podman[251221]: 2025-12-04 10:41:19.622619525 +0000 UTC m=+1.169156351 container died 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:41:20 np0005545273 systemd[1]: var-lib-containers-storage-overlay-dad238bba0d220f7cae769b668948adbbe9a969150fd5ae1b881dc253c82f5d6-merged.mount: Deactivated successfully.
Dec  4 05:41:20 np0005545273 podman[251221]: 2025-12-04 10:41:20.208996215 +0000 UTC m=+1.755533001 container remove 472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_spence, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:41:20 np0005545273 systemd[1]: libpod-conmon-472d67252787fef151978533259a8976fb4d16d38376e839e682a9013f37d45b.scope: Deactivated successfully.
Dec  4 05:41:20 np0005545273 podman[251331]: 2025-12-04 10:41:20.647519816 +0000 UTC m=+0.023260907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:41:20 np0005545273 podman[251331]: 2025-12-04 10:41:20.839972042 +0000 UTC m=+0.215713123 container create 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:41:20 np0005545273 systemd[1]: Started libpod-conmon-8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f.scope.
Dec  4 05:41:20 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:41:21 np0005545273 podman[251331]: 2025-12-04 10:41:21.279885007 +0000 UTC m=+0.655626128 container init 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:41:21 np0005545273 podman[251331]: 2025-12-04 10:41:21.287130437 +0000 UTC m=+0.662871518 container start 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:41:21 np0005545273 relaxed_shamir[251348]: 167 167
Dec  4 05:41:21 np0005545273 systemd[1]: libpod-8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f.scope: Deactivated successfully.
Dec  4 05:41:21 np0005545273 podman[251331]: 2025-12-04 10:41:21.347507375 +0000 UTC m=+0.723248466 container attach 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  4 05:41:21 np0005545273 podman[251331]: 2025-12-04 10:41:21.34811838 +0000 UTC m=+0.723859461 container died 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec  4 05:41:21 np0005545273 systemd[1]: var-lib-containers-storage-overlay-33ee2a9d1d4c70e72deed30ae2b1b27310e5ee971a4008b60ff650e8b04ee734-merged.mount: Deactivated successfully.
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "format": "json"}]: dispatch
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3a8de81a-77b5-415f-8412-5f7da4d28502, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3a8de81a-77b5-415f-8412-5f7da4d28502, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3a8de81a-77b5-415f-8412-5f7da4d28502' of type subvolume
Dec  4 05:41:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:21.702+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3a8de81a-77b5-415f-8412-5f7da4d28502' of type subvolume
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3a8de81a-77b5-415f-8412-5f7da4d28502", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3a8de81a-77b5-415f-8412-5f7da4d28502'' moved to trashcan
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3a8de81a-77b5-415f-8412-5f7da4d28502, vol_name:cephfs) < ""
Dec  4 05:41:21 np0005545273 podman[251331]: 2025-12-04 10:41:21.805965751 +0000 UTC m=+1.181706812 container remove 8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:41:21 np0005545273 systemd[1]: libpod-conmon-8eb8f478a1e11615c1f87b2bc2592105993c66edbb0aaef6a653b531f8f9047f.scope: Deactivated successfully.
Dec  4 05:41:22 np0005545273 podman[251373]: 2025-12-04 10:41:22.022388111 +0000 UTC m=+0.089887771 container create 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:41:22 np0005545273 podman[251373]: 2025-12-04 10:41:21.95665491 +0000 UTC m=+0.024154590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:41:22 np0005545273 systemd[1]: Started libpod-conmon-7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8.scope.
Dec  4 05:41:22 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:41:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:22 np0005545273 podman[251373]: 2025-12-04 10:41:22.352886071 +0000 UTC m=+0.420385741 container init 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:41:22 np0005545273 podman[251373]: 2025-12-04 10:41:22.360329076 +0000 UTC m=+0.427828736 container start 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:41:22 np0005545273 podman[251373]: 2025-12-04 10:41:22.364754066 +0000 UTC m=+0.432253736 container attach 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]: {
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:    "0": [
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:        {
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "devices": [
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "/dev/loop3"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            ],
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_name": "ceph_lv0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_size": "21470642176",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "name": "ceph_lv0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "tags": {
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cluster_name": "ceph",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.crush_device_class": "",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.encrypted": "0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.objectstore": "bluestore",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osd_id": "0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.type": "block",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.vdo": "0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.with_tpm": "0"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            },
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "type": "block",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "vg_name": "ceph_vg0"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:        }
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:    ],
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:    "1": [
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:        {
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "devices": [
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "/dev/loop4"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            ],
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_name": "ceph_lv1",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_size": "21470642176",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "name": "ceph_lv1",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "tags": {
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cluster_name": "ceph",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.crush_device_class": "",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.encrypted": "0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.objectstore": "bluestore",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osd_id": "1",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.type": "block",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.vdo": "0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.with_tpm": "0"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            },
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "type": "block",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "vg_name": "ceph_vg1"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:        }
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:    ],
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:    "2": [
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:        {
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "devices": [
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "/dev/loop5"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            ],
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_name": "ceph_lv2",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_size": "21470642176",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "name": "ceph_lv2",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "tags": {
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.cluster_name": "ceph",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.crush_device_class": "",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.encrypted": "0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.objectstore": "bluestore",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osd_id": "2",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.type": "block",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.vdo": "0",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:                "ceph.with_tpm": "0"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            },
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "type": "block",
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:            "vg_name": "ceph_vg2"
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:        }
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]:    ]
Dec  4 05:41:22 np0005545273 eloquent_thompson[251389]: }
Dec  4 05:41:22 np0005545273 systemd[1]: libpod-7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8.scope: Deactivated successfully.
Dec  4 05:41:22 np0005545273 podman[251373]: 2025-12-04 10:41:22.678823789 +0000 UTC m=+0.746323449 container died 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:41:22 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fc8f36fba44bb6a00270b37d5f313af816a2a31f3cca9f508a5ad280787b411c-merged.mount: Deactivated successfully.
Dec  4 05:41:22 np0005545273 podman[251373]: 2025-12-04 10:41:22.726740038 +0000 UTC m=+0.794239698 container remove 7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:41:22 np0005545273 systemd[1]: libpod-conmon-7a204ce8d1f058d1bd45e9abb061ed41cfa54581016b520068cba70d229145a8.scope: Deactivated successfully.
Dec  4 05:41:23 np0005545273 podman[251473]: 2025-12-04 10:41:23.275452793 +0000 UTC m=+0.103150011 container create 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:41:23 np0005545273 podman[251473]: 2025-12-04 10:41:23.195711414 +0000 UTC m=+0.023408652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:41:23 np0005545273 systemd[1]: Started libpod-conmon-2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7.scope.
Dec  4 05:41:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:41:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec  4 05:41:23 np0005545273 podman[251473]: 2025-12-04 10:41:23.399625424 +0000 UTC m=+0.227322662 container init 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  4 05:41:23 np0005545273 podman[251473]: 2025-12-04 10:41:23.407681794 +0000 UTC m=+0.235379012 container start 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:41:23 np0005545273 epic_beaver[251490]: 167 167
Dec  4 05:41:23 np0005545273 systemd[1]: libpod-2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7.scope: Deactivated successfully.
Dec  4 05:41:23 np0005545273 podman[251473]: 2025-12-04 10:41:23.417009985 +0000 UTC m=+0.244707223 container attach 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:41:23 np0005545273 podman[251473]: 2025-12-04 10:41:23.417602131 +0000 UTC m=+0.245299359 container died 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:41:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8d7ea25e8d4dd288d21e248a1008d9b3541759c76959e29ac3aebb92144c7041-merged.mount: Deactivated successfully.
Dec  4 05:41:23 np0005545273 podman[251473]: 2025-12-04 10:41:23.546599491 +0000 UTC m=+0.374296709 container remove 2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_beaver, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:41:23 np0005545273 systemd[1]: libpod-conmon-2e2c7ccd6b6a74dd147ee69c69dcfce17b5f52db272336e8aca2a4627c877de7.scope: Deactivated successfully.
Dec  4 05:41:23 np0005545273 podman[251513]: 2025-12-04 10:41:23.751760322 +0000 UTC m=+0.078297324 container create f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:41:23 np0005545273 podman[251513]: 2025-12-04 10:41:23.699729511 +0000 UTC m=+0.026266543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:41:23 np0005545273 systemd[1]: Started libpod-conmon-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope.
Dec  4 05:41:23 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:41:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:23 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:41:23 np0005545273 podman[251513]: 2025-12-04 10:41:23.865197136 +0000 UTC m=+0.191734158 container init f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:41:23 np0005545273 podman[251513]: 2025-12-04 10:41:23.873356939 +0000 UTC m=+0.199893941 container start f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:41:23 np0005545273 podman[251513]: 2025-12-04 10:41:23.906690466 +0000 UTC m=+0.233227468 container attach f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:41:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:24 np0005545273 lvm[251608]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:41:24 np0005545273 lvm[251605]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:41:24 np0005545273 lvm[251605]: VG ceph_vg0 finished
Dec  4 05:41:24 np0005545273 lvm[251608]: VG ceph_vg1 finished
Dec  4 05:41:24 np0005545273 lvm[251610]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:41:24 np0005545273 lvm[251610]: VG ceph_vg2 finished
Dec  4 05:41:24 np0005545273 lvm[251611]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:41:24 np0005545273 lvm[251611]: VG ceph_vg1 finished
Dec  4 05:41:24 np0005545273 lvm[251613]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:41:24 np0005545273 lvm[251613]: VG ceph_vg2 finished
Dec  4 05:41:24 np0005545273 hopeful_heisenberg[251529]: {}
Dec  4 05:41:24 np0005545273 systemd[1]: libpod-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope: Deactivated successfully.
Dec  4 05:41:24 np0005545273 systemd[1]: libpod-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope: Consumed 1.449s CPU time.
Dec  4 05:41:24 np0005545273 podman[251513]: 2025-12-04 10:41:24.766997883 +0000 UTC m=+1.093534885 container died f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  4 05:41:24 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1542ba02e85fd441536d51d8cd26321d1ea1ece954bc2a97cfa832aa1756906c-merged.mount: Deactivated successfully.
Dec  4 05:41:24 np0005545273 podman[251513]: 2025-12-04 10:41:24.975829284 +0000 UTC m=+1.302366286 container remove f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:41:24 np0005545273 systemd[1]: libpod-conmon-f99b94ab6d98bc048c4f4346c0af7a38c57846dd29556f2208cd8cc0f38446d9.scope: Deactivated successfully.
Dec  4 05:41:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:41:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:41:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:41:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/7ba28297-c9db-4f6b-88f7-45beda1e2ba0'.
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 30 KiB/s wr, 3 op/s
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/.meta.tmp'
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/.meta.tmp' to config b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b/.meta'
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "format": "json"}]: dispatch
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec  4 05:41:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec  4 05:41:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:41:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:41:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:41:26
Dec  4 05:41:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:41:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:41:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', 'vms']
Dec  4 05:41:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 46 KiB/s wr, 4 op/s
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8413aa5c40>)]
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8435ce5b80>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84183d26a0>)]
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:41:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:41:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:41:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 32 KiB/s wr, 5 op/s
Dec  4 05:41:29 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.iwufnj(active, since 27m)
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "format": "json"}]: dispatch
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:30 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:30.617+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1a650b0-8a39-49d0-8761-9a38bedfef6b' of type subvolume
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1a650b0-8a39-49d0-8761-9a38bedfef6b' of type subvolume
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1a650b0-8a39-49d0-8761-9a38bedfef6b", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c1a650b0-8a39-49d0-8761-9a38bedfef6b'' moved to trashcan
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1a650b0-8a39-49d0-8761-9a38bedfef6b, vol_name:cephfs) < ""
Dec  4 05:41:30 np0005545273 podman[251652]: 2025-12-04 10:41:30.9652753 +0000 UTC m=+0.068657514 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/d8c06adf-7d8c-42f7-8e3d-861c1d60ede8'.
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/.meta.tmp'
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/.meta.tmp' to config b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f/.meta'
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "format": "json"}]: dispatch
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec  4 05:41:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 44 KiB/s wr, 4 op/s
Dec  4 05:41:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 4 op/s
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/37c965e6-b9df-4d0f-8913-3188a3bb9352'.
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/.meta.tmp'
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/.meta.tmp' to config b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871/.meta'
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "format": "json"}]: dispatch
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec  4 05:41:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec  4 05:41:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 3 op/s
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8e063322-2225-425c-8041-94c64095457f", "format": "json"}]: dispatch
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8e063322-2225-425c-8041-94c64095457f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8e063322-2225-425c-8041-94c64095457f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8e063322-2225-425c-8041-94c64095457f' of type subvolume
Dec  4 05:41:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:36.107+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8e063322-2225-425c-8041-94c64095457f' of type subvolume
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8e063322-2225-425c-8041-94c64095457f", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8e063322-2225-425c-8041-94c64095457f'' moved to trashcan
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8e063322-2225-425c-8041-94c64095457f, vol_name:cephfs) < ""
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670871423372434 of space, bias 1.0, pg target 0.20012614270117302 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.437115334840369e-05 of space, bias 4.0, pg target 0.07724538401808442 quantized to 16 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:41:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 41 KiB/s wr, 4 op/s
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b37d179f-5d92-4510-9538-6c9b03887871", "format": "json"}]: dispatch
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b37d179f-5d92-4510-9538-6c9b03887871, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b37d179f-5d92-4510-9538-6c9b03887871, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b37d179f-5d92-4510-9538-6c9b03887871' of type subvolume
Dec  4 05:41:38 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:38.311+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b37d179f-5d92-4510-9538-6c9b03887871' of type subvolume
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b37d179f-5d92-4510-9538-6c9b03887871", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b37d179f-5d92-4510-9538-6c9b03887871'' moved to trashcan
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b37d179f-5d92-4510-9538-6c9b03887871, vol_name:cephfs) < ""
Dec  4 05:41:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 4 op/s
Dec  4 05:41:41 np0005545273 nova_compute[244644]: 2025-12-04 10:41:41.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:41 np0005545273 nova_compute[244644]: 2025-12-04 10:41:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:41:41 np0005545273 nova_compute[244644]: 2025-12-04 10:41:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:41:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 45 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Dec  4 05:41:41 np0005545273 podman[251677]: 2025-12-04 10:41:41.398259107 +0000 UTC m=+0.494216604 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  4 05:41:41 np0005545273 podman[251676]: 2025-12-04 10:41:41.410239094 +0000 UTC m=+0.504945330 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec  4 05:41:41 np0005545273 nova_compute[244644]: 2025-12-04 10:41:41.412 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:41:41 np0005545273 nova_compute[244644]: 2025-12-04 10:41:41.413 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:41 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec  4 05:41:42 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/4710ae0a-ec4d-4e62-8fda-a8295c2f620f'.
Dec  4 05:41:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/.meta.tmp'
Dec  4 05:41:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/.meta.tmp' to config b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced/.meta'
Dec  4 05:41:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec  4 05:41:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "format": "json"}]: dispatch
Dec  4 05:41:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec  4 05:41:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec  4 05:41:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.367 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.368 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.368 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.368 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.369 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:41:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:41:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279979488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:41:42 np0005545273 nova_compute[244644]: 2025-12-04 10:41:42.934 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:41:43 np0005545273 nova_compute[244644]: 2025-12-04 10:41:43.085 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:41:43 np0005545273 nova_compute[244644]: 2025-12-04 10:41:43.087 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:41:43 np0005545273 nova_compute[244644]: 2025-12-04 10:41:43.087 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:41:43 np0005545273 nova_compute[244644]: 2025-12-04 10:41:43.087 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:41:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 6 op/s
Dec  4 05:41:43 np0005545273 nova_compute[244644]: 2025-12-04 10:41:43.618 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:41:43 np0005545273 nova_compute[244644]: 2025-12-04 10:41:43.619 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:41:43 np0005545273 nova_compute[244644]: 2025-12-04 10:41:43.644 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:41:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:41:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569300093' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:41:44 np0005545273 nova_compute[244644]: 2025-12-04 10:41:44.170 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:41:44 np0005545273 nova_compute[244644]: 2025-12-04 10:41:44.175 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:41:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:45 np0005545273 nova_compute[244644]: 2025-12-04 10:41:45.374 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:41:45 np0005545273 nova_compute[244644]: 2025-12-04 10:41:45.376 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:41:45 np0005545273 nova_compute[244644]: 2025-12-04 10:41:45.376 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:41:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 5 op/s
Dec  4 05:41:46 np0005545273 nova_compute[244644]: 2025-12-04 10:41:46.376 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:46 np0005545273 nova_compute[244644]: 2025-12-04 10:41:46.377 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:46 np0005545273 nova_compute[244644]: 2025-12-04 10:41:46.396 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:46 np0005545273 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:46 np0005545273 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:46 np0005545273 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:41:46 np0005545273 nova_compute[244644]: 2025-12-04 10:41:46.397 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:41:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 39 KiB/s wr, 5 op/s
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "format": "json"}]: dispatch
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:48 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:48.725+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a5f4ecd-03b6-407a-8d82-15daa95b5ced' of type subvolume
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9a5f4ecd-03b6-407a-8d82-15daa95b5ced' of type subvolume
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9a5f4ecd-03b6-407a-8d82-15daa95b5ced", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9a5f4ecd-03b6-407a-8d82-15daa95b5ced'' moved to trashcan
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:48 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9a5f4ecd-03b6-407a-8d82-15daa95b5ced, vol_name:cephfs) < ""
Dec  4 05:41:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 28 KiB/s wr, 4 op/s
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 23 KiB/s wr, 3 op/s
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/a99bfaa6-75dd-4a13-893b-da8b9b54dca0'.
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/.meta.tmp'
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/.meta.tmp' to config b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa/.meta'
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "format": "json"}]: dispatch
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec  4 05:41:51 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec  4 05:41:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/c8254e14-3b4f-4a93-a1ae-bdb20560cbeb'.
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/.meta.tmp'
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/.meta.tmp' to config b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c/.meta'
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "format": "json"}]: dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 40 KiB/s wr, 5 op/s
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/6b0b7784-b07f-495c-ad5e-81986ac7be36'.
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/.meta.tmp'
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/.meta.tmp' to config b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802/.meta'
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "format": "json"}]: dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec  4 05:41:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec  4 05:41:54 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/31f73d8a-d868-48e2-8b85-21117fdcc89e'.
Dec  4 05:41:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/.meta.tmp'
Dec  4 05:41:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/.meta.tmp' to config b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16/.meta'
Dec  4 05:41:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec  4 05:41:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "format": "json"}]: dispatch
Dec  4 05:41:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec  4 05:41:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec  4 05:41:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:41:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:41:54.907 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:41:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:41:54.908 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:41:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:41:54.908 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:41:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 2 op/s
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "format": "json"}]: dispatch
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:56 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:56.351+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c54a12b3-b92e-4a09-81b2-2bfc280d4eaa' of type subvolume
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c54a12b3-b92e-4a09-81b2-2bfc280d4eaa' of type subvolume
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c54a12b3-b92e-4a09-81b2-2bfc280d4eaa", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c54a12b3-b92e-4a09-81b2-2bfc280d4eaa'' moved to trashcan
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c54a12b3-b92e-4a09-81b2-2bfc280d4eaa, vol_name:cephfs) < ""
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cda2cc19-4836-4171-8f02-990e4046f802", "format": "json"}]: dispatch
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cda2cc19-4836-4171-8f02-990e4046f802, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cda2cc19-4836-4171-8f02-990e4046f802, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:57 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:57.155+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cda2cc19-4836-4171-8f02-990e4046f802' of type subvolume
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cda2cc19-4836-4171-8f02-990e4046f802' of type subvolume
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cda2cc19-4836-4171-8f02-990e4046f802", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cda2cc19-4836-4171-8f02-990e4046f802'' moved to trashcan
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cda2cc19-4836-4171-8f02-990e4046f802, vol_name:cephfs) < ""
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 20 KiB/s wr, 2 op/s
Dec  4 05:41:57 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:41:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:41:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:41:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:41:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:41:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:41:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 5 op/s
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "format": "json"}]: dispatch
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3f91f1e-db38-4937-881a-6c033198bb16, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3f91f1e-db38-4937-881a-6c033198bb16, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:41:59.490+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3f91f1e-db38-4937-881a-6c033198bb16' of type subvolume
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3f91f1e-db38-4937-881a-6c033198bb16' of type subvolume
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3f91f1e-db38-4937-881a-6c033198bb16", "force": true, "format": "json"}]: dispatch
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b3f91f1e-db38-4937-881a-6c033198bb16'' moved to trashcan
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3f91f1e-db38-4937-881a-6c033198bb16, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/c8609094-ece1-462b-9a3d-54c307953629'.
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp'
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp' to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta'
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "format": "json"}]: dispatch
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:41:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:41:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "format": "json"}]: dispatch
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bdae8876-925e-4534-9c67-ead7c1879e8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bdae8876-925e-4534-9c67-ead7c1879e8c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:00 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:00.737+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bdae8876-925e-4534-9c67-ead7c1879e8c' of type subvolume
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bdae8876-925e-4534-9c67-ead7c1879e8c' of type subvolume
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bdae8876-925e-4534-9c67-ead7c1879e8c", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bdae8876-925e-4534-9c67-ead7c1879e8c'' moved to trashcan
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bdae8876-925e-4534-9c67-ead7c1879e8c, vol_name:cephfs) < ""
Dec  4 05:42:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 46 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 48 KiB/s wr, 5 op/s
Dec  4 05:42:01 np0005545273 podman[251765]: 2025-12-04 10:42:01.950895522 +0000 UTC m=+0.064277606 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 05:42:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68", "format": "json"}]: dispatch
Dec  4 05:42:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 72 KiB/s wr, 9 op/s
Dec  4 05:42:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/1fa680d2-c7e3-4b92-8668-e87a35555293'.
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/.meta.tmp'
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/.meta.tmp' to config b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e/.meta'
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "format": "json"}]: dispatch
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:42:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "format": "json"}]: dispatch
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:04.812+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e221725b-e6e8-4c35-9638-fa0fd11665ad' of type subvolume
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e221725b-e6e8-4c35-9638-fa0fd11665ad' of type subvolume
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e221725b-e6e8-4c35-9638-fa0fd11665ad", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e221725b-e6e8-4c35-9638-fa0fd11665ad'' moved to trashcan
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e221725b-e6e8-4c35-9638-fa0fd11665ad, vol_name:cephfs) < ""
Dec  4 05:42:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 6 op/s
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 46 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 53 KiB/s wr, 7 op/s
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68_67473b26-a211-4250-90a1-ca773f3435a0", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68_67473b26-a211-4250-90a1-ca773f3435a0, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp'
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp' to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta'
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68_67473b26-a211-4250-90a1-ca773f3435a0, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "snap_name": "5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp'
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta.tmp' to config b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8/.meta'
Dec  4 05:42:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5d3cb7a6-0d61-4ba5-bb06-6cd12e9e1f68, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "format": "json"}]: dispatch
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:08 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:08.217+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d22f483-2196-4c24-a6e8-b6086bc6989e' of type subvolume
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d22f483-2196-4c24-a6e8-b6086bc6989e' of type subvolume
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d22f483-2196-4c24-a6e8-b6086bc6989e", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3d22f483-2196-4c24-a6e8-b6086bc6989e'' moved to trashcan
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d22f483-2196-4c24-a6e8-b6086bc6989e, vol_name:cephfs) < ""
Dec  4 05:42:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 73 KiB/s wr, 10 op/s
Dec  4 05:42:09 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:42:09.544 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:42:09 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:42:09.545 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:42:10 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:42:10.548 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:42:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  4 05:42:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  4 05:42:10 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "format": "json"}]: dispatch
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:11 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:11.278+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '481aa727-f970-4ad9-94c6-ca9f61924fb8' of type subvolume
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '481aa727-f970-4ad9-94c6-ca9f61924fb8' of type subvolume
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "481aa727-f970-4ad9-94c6-ca9f61924fb8", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/481aa727-f970-4ad9-94c6-ca9f61924fb8'' moved to trashcan
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:481aa727-f970-4ad9-94c6-ca9f61924fb8, vol_name:cephfs) < ""
Dec  4 05:42:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 53 KiB/s wr, 8 op/s
Dec  4 05:42:11 np0005545273 podman[251787]: 2025-12-04 10:42:11.941988401 +0000 UTC m=+0.049525679 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  4 05:42:11 np0005545273 podman[251786]: 2025-12-04 10:42:11.972361485 +0000 UTC m=+0.083195685 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  4 05:42:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 51 KiB/s wr, 8 op/s
Dec  4 05:42:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  4 05:42:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  4 05:42:14 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  4 05:42:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 63 KiB/s wr, 10 op/s
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8'.
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/.meta.tmp'
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/.meta.tmp' to config b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/.meta'
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "format": "json"}]: dispatch
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:42:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:42:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 33 KiB/s wr, 5 op/s
Dec  4 05:42:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 702 B/s rd, 42 KiB/s wr, 7 op/s
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8'.
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/.meta.tmp'
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/.meta.tmp' to config b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/.meta'
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "format": "json"}]: dispatch
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:42:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:42:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 36 KiB/s wr, 6 op/s
Dec  4 05:42:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 22 KiB/s wr, 3 op/s
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:42:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:24 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_58ec2fca-4cd4-4393-9127-d135ebc9b908", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 184 B/s rd, 20 KiB/s wr, 3 op/s
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:42:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:42:26 np0005545273 podman[251978]: 2025-12-04 10:42:26.290735847 +0000 UTC m=+0.042996727 container create 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:42:26 np0005545273 systemd[1]: Started libpod-conmon-715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f.scope.
Dec  4 05:42:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:42:26 np0005545273 podman[251978]: 2025-12-04 10:42:26.271056099 +0000 UTC m=+0.023316979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:42:26 np0005545273 podman[251978]: 2025-12-04 10:42:26.372149338 +0000 UTC m=+0.124410218 container init 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:42:26 np0005545273 podman[251978]: 2025-12-04 10:42:26.379567362 +0000 UTC m=+0.131828222 container start 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:42:26 np0005545273 podman[251978]: 2025-12-04 10:42:26.383264844 +0000 UTC m=+0.135525834 container attach 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:42:26 np0005545273 wizardly_gates[251994]: 167 167
Dec  4 05:42:26 np0005545273 systemd[1]: libpod-715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f.scope: Deactivated successfully.
Dec  4 05:42:26 np0005545273 podman[251978]: 2025-12-04 10:42:26.386181556 +0000 UTC m=+0.138442416 container died 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:42:26 np0005545273 systemd[1]: var-lib-containers-storage-overlay-227262a48362c5240ed6104526cb846d7bc14489a5dbd5f9d6a34e9bdbf0bd02-merged.mount: Deactivated successfully.
Dec  4 05:42:26 np0005545273 podman[251978]: 2025-12-04 10:42:26.426854635 +0000 UTC m=+0.179115485 container remove 715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_gates, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:42:26 np0005545273 systemd[1]: libpod-conmon-715acb8d177353447b754c944288f65eaf72dfb2bc6c302658ae08010a07586f.scope: Deactivated successfully.
Dec  4 05:42:26 np0005545273 podman[252018]: 2025-12-04 10:42:26.582688512 +0000 UTC m=+0.042112046 container create 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:42:26 np0005545273 systemd[1]: Started libpod-conmon-9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3.scope.
Dec  4 05:42:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:42:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:26 np0005545273 podman[252018]: 2025-12-04 10:42:26.564798798 +0000 UTC m=+0.024222352 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:42:26 np0005545273 podman[252018]: 2025-12-04 10:42:26.668531711 +0000 UTC m=+0.127955275 container init 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  4 05:42:26 np0005545273 podman[252018]: 2025-12-04 10:42:26.675220918 +0000 UTC m=+0.134644452 container start 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:42:26 np0005545273 podman[252018]: 2025-12-04 10:42:26.678280703 +0000 UTC m=+0.137704237 container attach 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:42:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:42:26
Dec  4 05:42:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:42:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:42:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'backups']
Dec  4 05:42:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:42:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:42:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:42:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:42:27 np0005545273 friendly_noyce[252034]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:42:27 np0005545273 friendly_noyce[252034]: --> All data devices are unavailable
Dec  4 05:42:27 np0005545273 systemd[1]: libpod-9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3.scope: Deactivated successfully.
Dec  4 05:42:27 np0005545273 podman[252018]: 2025-12-04 10:42:27.161707108 +0000 UTC m=+0.621130642 container died 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:42:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2c97babc61c884de6951b9c66843984c00f6a6814018f1f789423d17d5774d39-merged.mount: Deactivated successfully.
Dec  4 05:42:27 np0005545273 podman[252018]: 2025-12-04 10:42:27.201608459 +0000 UTC m=+0.661031993 container remove 9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_noyce, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:42:27 np0005545273 systemd[1]: libpod-conmon-9120d7a54d5c3ccc07a9e14e07f0423afe7ec37921b73903fcb48cad02ff2bf3.scope: Deactivated successfully.
Dec  4 05:42:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 3 op/s
Dec  4 05:42:27 np0005545273 podman[252130]: 2025-12-04 10:42:27.660969657 +0000 UTC m=+0.041089240 container create d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:42:27 np0005545273 systemd[1]: Started libpod-conmon-d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd.scope.
Dec  4 05:42:27 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:42:27 np0005545273 podman[252130]: 2025-12-04 10:42:27.723389846 +0000 UTC m=+0.103509439 container init d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:42:27 np0005545273 podman[252130]: 2025-12-04 10:42:27.73001234 +0000 UTC m=+0.110131923 container start d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:42:27 np0005545273 agitated_stonebraker[252147]: 167 167
Dec  4 05:42:27 np0005545273 podman[252130]: 2025-12-04 10:42:27.734214134 +0000 UTC m=+0.114333737 container attach d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:42:27 np0005545273 systemd[1]: libpod-d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd.scope: Deactivated successfully.
Dec  4 05:42:27 np0005545273 podman[252130]: 2025-12-04 10:42:27.735367223 +0000 UTC m=+0.115486836 container died d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:42:27 np0005545273 podman[252130]: 2025-12-04 10:42:27.642663613 +0000 UTC m=+0.022783216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:42:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0557a3c135958f1e0d02f228ba121d81f2df5ccf0080fb2ecf5bb932a25b6712-merged.mount: Deactivated successfully.
Dec  4 05:42:27 np0005545273 podman[252130]: 2025-12-04 10:42:27.775670563 +0000 UTC m=+0.155790156 container remove d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:42:27 np0005545273 systemd[1]: libpod-conmon-d949ae877e4350fab03ed2f237ab1d451b5c5d60dc70c6dad6915647c6b705cd.scope: Deactivated successfully.
Dec  4 05:42:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:42:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:42:27 np0005545273 podman[252170]: 2025-12-04 10:42:27.959640868 +0000 UTC m=+0.060448361 container create e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:42:28 np0005545273 systemd[1]: Started libpod-conmon-e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d.scope.
Dec  4 05:42:28 np0005545273 podman[252170]: 2025-12-04 10:42:27.941334913 +0000 UTC m=+0.042142426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:42:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:42:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:28 np0005545273 podman[252170]: 2025-12-04 10:42:28.058833439 +0000 UTC m=+0.159640962 container init e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:42:28 np0005545273 podman[252170]: 2025-12-04 10:42:28.065238128 +0000 UTC m=+0.166045621 container start e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:42:28 np0005545273 podman[252170]: 2025-12-04 10:42:28.068758775 +0000 UTC m=+0.169566268 container attach e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:42:28 np0005545273 silly_germain[252184]: {
Dec  4 05:42:28 np0005545273 silly_germain[252184]:    "0": [
Dec  4 05:42:28 np0005545273 silly_germain[252184]:        {
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "devices": [
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "/dev/loop3"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            ],
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_name": "ceph_lv0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_size": "21470642176",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "name": "ceph_lv0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "tags": {
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cluster_name": "ceph",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.crush_device_class": "",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.encrypted": "0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.objectstore": "bluestore",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osd_id": "0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.type": "block",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.vdo": "0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.with_tpm": "0"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            },
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "type": "block",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "vg_name": "ceph_vg0"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:        }
Dec  4 05:42:28 np0005545273 silly_germain[252184]:    ],
Dec  4 05:42:28 np0005545273 silly_germain[252184]:    "1": [
Dec  4 05:42:28 np0005545273 silly_germain[252184]:        {
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "devices": [
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "/dev/loop4"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            ],
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_name": "ceph_lv1",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_size": "21470642176",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "name": "ceph_lv1",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "tags": {
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cluster_name": "ceph",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.crush_device_class": "",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.encrypted": "0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.objectstore": "bluestore",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osd_id": "1",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.type": "block",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.vdo": "0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.with_tpm": "0"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            },
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "type": "block",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "vg_name": "ceph_vg1"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:        }
Dec  4 05:42:28 np0005545273 silly_germain[252184]:    ],
Dec  4 05:42:28 np0005545273 silly_germain[252184]:    "2": [
Dec  4 05:42:28 np0005545273 silly_germain[252184]:        {
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "devices": [
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "/dev/loop5"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            ],
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_name": "ceph_lv2",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_size": "21470642176",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "name": "ceph_lv2",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "tags": {
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.cluster_name": "ceph",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.crush_device_class": "",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.encrypted": "0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.objectstore": "bluestore",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osd_id": "2",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.type": "block",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.vdo": "0",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:                "ceph.with_tpm": "0"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            },
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "type": "block",
Dec  4 05:42:28 np0005545273 silly_germain[252184]:            "vg_name": "ceph_vg2"
Dec  4 05:42:28 np0005545273 silly_germain[252184]:        }
Dec  4 05:42:28 np0005545273 silly_germain[252184]:    ]
Dec  4 05:42:28 np0005545273 silly_germain[252184]: }
Dec  4 05:42:28 np0005545273 systemd[1]: libpod-e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d.scope: Deactivated successfully.
Dec  4 05:42:28 np0005545273 podman[252170]: 2025-12-04 10:42:28.351924932 +0000 UTC m=+0.452732425 container died e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:42:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ca8679ac8cb0b3943eb2c1666e720f4edc781593b3ad41918a0a45ceeeddb0d6-merged.mount: Deactivated successfully.
Dec  4 05:42:28 np0005545273 podman[252170]: 2025-12-04 10:42:28.388589352 +0000 UTC m=+0.489396845 container remove e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_germain, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec  4 05:42:28 np0005545273 systemd[1]: libpod-conmon-e3b888445fd677278638fb10b2e0516eeccbb41349db90f795998182ac836d7d.scope: Deactivated successfully.
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8
Dec  4 05:42:28 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908/3339395c-7998-4a2a-83ee-2fce006949f8],prefix=session evict} (starting...)
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "format": "json"}]: dispatch
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:28.588+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58ec2fca-4cd4-4393-9127-d135ebc9b908' of type subvolume
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58ec2fca-4cd4-4393-9127-d135ebc9b908' of type subvolume
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58ec2fca-4cd4-4393-9127-d135ebc9b908", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/58ec2fca-4cd4-4393-9127-d135ebc9b908'' moved to trashcan
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58ec2fca-4cd4-4393-9127-d135ebc9b908, vol_name:cephfs) < ""
Dec  4 05:42:28 np0005545273 podman[252268]: 2025-12-04 10:42:28.846609827 +0000 UTC m=+0.038840975 container create 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:42:28 np0005545273 systemd[1]: Started libpod-conmon-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope.
Dec  4 05:42:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:28 np0005545273 podman[252268]: 2025-12-04 10:42:28.908933532 +0000 UTC m=+0.101164720 container init 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:42:28 np0005545273 podman[252268]: 2025-12-04 10:42:28.914751778 +0000 UTC m=+0.106982926 container start 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:42:28 np0005545273 podman[252268]: 2025-12-04 10:42:28.918201273 +0000 UTC m=+0.110432471 container attach 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:42:28 np0005545273 systemd[1]: libpod-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope: Deactivated successfully.
Dec  4 05:42:28 np0005545273 peaceful_moore[252284]: 167 167
Dec  4 05:42:28 np0005545273 podman[252268]: 2025-12-04 10:42:28.920382447 +0000 UTC m=+0.112613595 container died 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:42:28 np0005545273 conmon[252284]: conmon 563be1ec6af2edf7d2cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope/container/memory.events
Dec  4 05:42:28 np0005545273 podman[252268]: 2025-12-04 10:42:28.830131357 +0000 UTC m=+0.022362525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:42:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d6a04bdf779a6a4d7be7782df25b13490f608b1cc93dd4b38cce9c31596f9479-merged.mount: Deactivated successfully.
Dec  4 05:42:28 np0005545273 podman[252268]: 2025-12-04 10:42:28.956427891 +0000 UTC m=+0.148659059 container remove 563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:42:28 np0005545273 systemd[1]: libpod-conmon-563be1ec6af2edf7d2cff598e0af4d5f485dbea1882ff3e0f81cc98e93e7b8b1.scope: Deactivated successfully.
Dec  4 05:42:29 np0005545273 podman[252308]: 2025-12-04 10:42:29.106550637 +0000 UTC m=+0.042577648 container create 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:42:29 np0005545273 systemd[1]: Started libpod-conmon-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope.
Dec  4 05:42:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:42:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:29 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:42:29 np0005545273 podman[252308]: 2025-12-04 10:42:29.085426342 +0000 UTC m=+0.021453413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:42:29 np0005545273 podman[252308]: 2025-12-04 10:42:29.184666414 +0000 UTC m=+0.120693455 container init 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:42:29 np0005545273 podman[252308]: 2025-12-04 10:42:29.191908954 +0000 UTC m=+0.127935965 container start 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:42:29 np0005545273 podman[252308]: 2025-12-04 10:42:29.197042522 +0000 UTC m=+0.133069563 container attach 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:42:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 32 KiB/s wr, 4 op/s
Dec  4 05:42:29 np0005545273 lvm[252402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:42:29 np0005545273 lvm[252403]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:42:29 np0005545273 lvm[252403]: VG ceph_vg1 finished
Dec  4 05:42:29 np0005545273 lvm[252402]: VG ceph_vg0 finished
Dec  4 05:42:29 np0005545273 lvm[252405]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:42:29 np0005545273 lvm[252405]: VG ceph_vg2 finished
Dec  4 05:42:30 np0005545273 blissful_ganguly[252324]: {}
Dec  4 05:42:30 np0005545273 systemd[1]: libpod-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope: Deactivated successfully.
Dec  4 05:42:30 np0005545273 podman[252308]: 2025-12-04 10:42:30.036674056 +0000 UTC m=+0.972701077 container died 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:42:30 np0005545273 systemd[1]: libpod-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope: Consumed 1.378s CPU time.
Dec  4 05:42:30 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8d3fe7ba6621176d9aa66051504346b9c8d014c5728c2300d1990f7048dd8383-merged.mount: Deactivated successfully.
Dec  4 05:42:30 np0005545273 podman[252308]: 2025-12-04 10:42:30.083749343 +0000 UTC m=+1.019776364 container remove 8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:42:30 np0005545273 systemd[1]: libpod-conmon-8ac1ce6bd668fa762db7e39af910ff39df83611f830c2156cc2eeedc0b1ba678.scope: Deactivated successfully.
Dec  4 05:42:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:42:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:42:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:42:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:42:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:42:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s wr, 3 op/s
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0'.
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/.meta.tmp'
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/.meta.tmp' to config b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/.meta'
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "format": "json"}]: dispatch
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:42:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:42:32 np0005545273 podman[252446]: 2025-12-04 10:42:32.980213313 +0000 UTC m=+0.090651850 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:42:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 6 op/s
Dec  4 05:42:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:42:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:35 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_c5da5d86-f585-431a-b524-b52c13853cdd", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 39 KiB/s wr, 5 op/s
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671073559850907 of space, bias 1.0, pg target 0.2001322067955272 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00010206047783933782 of space, bias 4.0, pg target 0.12247257340720538 quantized to 16 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:42:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 47 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 39 KiB/s wr, 5 op/s
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0
Dec  4 05:42:39 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd/cf9e35a3-eae0-419f-8f76-2382b050c1d0],prefix=session evict} (starting...)
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "format": "json"}]: dispatch
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5da5d86-f585-431a-b524-b52c13853cdd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5da5d86-f585-431a-b524-b52c13853cdd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:39.328+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5da5d86-f585-431a-b524-b52c13853cdd' of type subvolume
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5da5d86-f585-431a-b524-b52c13853cdd' of type subvolume
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5da5d86-f585-431a-b524-b52c13853cdd", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5da5d86-f585-431a-b524-b52c13853cdd'' moved to trashcan
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5da5d86-f585-431a-b524-b52c13853cdd, vol_name:cephfs) < ""
Dec  4 05:42:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 50 KiB/s wr, 6 op/s
Dec  4 05:42:41 np0005545273 nova_compute[244644]: 2025-12-04 10:42:41.340 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:41 np0005545273 nova_compute[244644]: 2025-12-04 10:42:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:42:41 np0005545273 nova_compute[244644]: 2025-12-04 10:42:41.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:42:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 37 KiB/s wr, 5 op/s
Dec  4 05:42:41 np0005545273 nova_compute[244644]: 2025-12-04 10:42:41.429 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:42:42 np0005545273 nova_compute[244644]: 2025-12-04 10:42:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf'.
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/.meta.tmp'
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/.meta.tmp' to config b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/.meta'
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "format": "json"}]: dispatch
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:42:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:42:42 np0005545273 podman[252471]: 2025-12-04 10:42:42.952034585 +0000 UTC m=+0.054764600 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  4 05:42:42 np0005545273 podman[252470]: 2025-12-04 10:42:42.983359213 +0000 UTC m=+0.087159504 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  4 05:42:43 np0005545273 nova_compute[244644]: 2025-12-04 10:42:43.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:43 np0005545273 nova_compute[244644]: 2025-12-04 10:42:43.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:44 np0005545273 nova_compute[244644]: 2025-12-04 10:42:44.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:44 np0005545273 nova_compute[244644]: 2025-12-04 10:42:44.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:42:44 np0005545273 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:42:44 np0005545273 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:42:44 np0005545273 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:42:44 np0005545273 nova_compute[244644]: 2025-12-04 10:42:44.371 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139802679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:42:44 np0005545273 nova_compute[244644]: 2025-12-04 10:42:44.911 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.955998) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844964956036, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1633, "num_deletes": 257, "total_data_size": 2250600, "memory_usage": 2279296, "flush_reason": "Manual Compaction"}
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844964974702, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2214525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19657, "largest_seqno": 21289, "table_properties": {"data_size": 2207086, "index_size": 4189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17739, "raw_average_key_size": 20, "raw_value_size": 2191305, "raw_average_value_size": 2581, "num_data_blocks": 188, "num_entries": 849, "num_filter_entries": 849, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844839, "oldest_key_time": 1764844839, "file_creation_time": 1764844964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 18755 microseconds, and 6490 cpu microseconds.
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.974745) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2214525 bytes OK
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.974788) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.976939) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.976961) EVENT_LOG_v1 {"time_micros": 1764844964976954, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.976980) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2243208, prev total WAL file size 2243208, number of live WAL files 2.
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.977707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2162KB)], [47(7165KB)]
Dec  4 05:42:44 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844964977795, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9551718, "oldest_snapshot_seqno": -1}
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4611 keys, 7772554 bytes, temperature: kUnknown
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844965032241, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7772554, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7741017, "index_size": 18883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 114458, "raw_average_key_size": 24, "raw_value_size": 7657118, "raw_average_value_size": 1660, "num_data_blocks": 788, "num_entries": 4611, "num_filter_entries": 4611, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764844964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.032621) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7772554 bytes
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.034209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.0 rd, 142.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 5146, records dropped: 535 output_compression: NoCompression
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.034226) EVENT_LOG_v1 {"time_micros": 1764844965034217, "job": 24, "event": "compaction_finished", "compaction_time_micros": 54588, "compaction_time_cpu_micros": 20320, "output_level": 6, "num_output_files": 1, "total_output_size": 7772554, "num_input_records": 5146, "num_output_records": 4611, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844965034667, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764844965035947, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:44.977594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.035996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:42:45.036006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.065 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.066 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5048MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.067 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.067 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.130 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.131 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.153 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:42:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:42:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1914684475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.705 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.711 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.740 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.742 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:42:45 np0005545273 nova_compute[244644]: 2025-12-04 10:42:45.742 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:42:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:42:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:46 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:46 np0005545273 nova_compute[244644]: 2025-12-04 10:42:46.738 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:46 np0005545273 nova_compute[244644]: 2025-12-04 10:42:46.738 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:46 np0005545273 nova_compute[244644]: 2025-12-04 10:42:46.739 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:46 np0005545273 nova_compute[244644]: 2025-12-04 10:42:46.739 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:42:46 np0005545273 nova_compute[244644]: 2025-12-04 10:42:46.739 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_d7ec0481-b957-40a8-acf9-4ac33a165908", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Dec  4 05:42:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 6 op/s
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec  4 05:42:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf
Dec  4 05:42:49 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908/a565891a-3a2d-45ae-abea-99a7488506bf],prefix=session evict} (starting...)
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:42:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "format": "json"}]: dispatch
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d7ec0481-b957-40a8-acf9-4ac33a165908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d7ec0481-b957-40a8-acf9-4ac33a165908, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:50 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:50.045+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7ec0481-b957-40a8-acf9-4ac33a165908' of type subvolume
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7ec0481-b957-40a8-acf9-4ac33a165908' of type subvolume
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d7ec0481-b957-40a8-acf9-4ac33a165908", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d7ec0481-b957-40a8-acf9-4ac33a165908'' moved to trashcan
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7ec0481-b957-40a8-acf9-4ac33a165908, vol_name:cephfs) < ""
Dec  4 05:42:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 5 op/s
Dec  4 05:42:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:42:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:53 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:42:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:42:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 61 KiB/s wr, 7 op/s
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/12901b2f-0604-4ca2-8ff9-99a77556cca5'.
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/.meta.tmp'
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/.meta.tmp' to config b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944/.meta'
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "format": "json"}]: dispatch
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec  4 05:42:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec  4 05:42:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:42:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:42:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:42:54.908 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:42:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:42:54.909 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:42:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:42:54.909 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:42:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 4 op/s
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 48 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 5 op/s
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:42:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec  4 05:42:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec  4 05:42:57 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:42:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:42:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:42:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:42:58 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "format": "json"}]: dispatch
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:42:58 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:42:58.621+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '24d9d739-98c3-41b3-9e91-5fbf698f4944' of type subvolume
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '24d9d739-98c3-41b3-9e91-5fbf698f4944' of type subvolume
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "24d9d739-98c3-41b3-9e91-5fbf698f4944", "force": true, "format": "json"}]: dispatch
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/24d9d739-98c3-41b3-9e91-5fbf698f4944'' moved to trashcan
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:42:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:24d9d739-98c3-41b3-9e91-5fbf698f4944, vol_name:cephfs) < ""
Dec  4 05:42:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:42:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 8 op/s
Dec  4 05:43:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:01 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 47 KiB/s wr, 6 op/s
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/9a911faf-043c-4c37-9142-455a2d8f4429'.
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/.meta.tmp'
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/.meta.tmp' to config b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37/.meta'
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "format": "json"}]: dispatch
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec  4 05:43:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec  4 05:43:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 78 KiB/s wr, 10 op/s
Dec  4 05:43:03 np0005545273 podman[252563]: 2025-12-04 10:43:03.959702031 +0000 UTC m=+0.062867441 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:43:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:43:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec  4 05:43:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:43:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec  4 05:43:04 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:43:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:43:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 7 op/s
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "format": "json"}]: dispatch
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:06 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:06.154+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '97fc4d92-2e4d-40fb-86bf-ef965853aa37' of type subvolume
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '97fc4d92-2e4d-40fb-86bf-ef965853aa37' of type subvolume
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "97fc4d92-2e4d-40fb-86bf-ef965853aa37", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/97fc4d92-2e4d-40fb-86bf-ef965853aa37'' moved to trashcan
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:97fc4d92-2e4d-40fb-86bf-ef965853aa37, vol_name:cephfs) < ""
Dec  4 05:43:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 8 op/s
Dec  4 05:43:08 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:43:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:43:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:08 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec  4 05:43:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:43:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 10 op/s
Dec  4 05:43:10 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:43:10.248 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:43:10 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:43:10.250 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:43:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 8 op/s
Dec  4 05:43:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:43:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1768623676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:43:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:43:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1768623676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec  4 05:43:12 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:43:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:43:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 80 KiB/s wr, 11 op/s
Dec  4 05:43:13 np0005545273 podman[252590]: 2025-12-04 10:43:13.603679489 +0000 UTC m=+0.052822862 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  4 05:43:13 np0005545273 podman[252589]: 2025-12-04 10:43:13.650961923 +0000 UTC m=+0.104921994 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  4 05:43:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 7 op/s
Dec  4 05:43:15 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "tenant_id": "094a9e5adfae45769d099eaf0d4f598c", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:15 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:43:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:43:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:15 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-195673542 with tenant 094a9e5adfae45769d099eaf0d4f598c
Dec  4 05:43:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:15 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:15 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  4 05:43:15 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume authorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, tenant_id:094a9e5adfae45769d099eaf0d4f598c, vol_name:cephfs) < ""
Dec  4 05:43:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:16 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-195673542", "caps": ["mds", "allow rw path=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_ee821ced-1eec-43e8-af63-bd95973cd67b", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:17 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:43:17.252 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:43:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 49 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 49 KiB/s wr, 8 op/s
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/e2160647-d792-4be6-83e3-0a77d5539fd0'.
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp'
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp' to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta'
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "format": "json"}]: dispatch
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} v 0)
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} v 0)
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume deauthorize, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "auth_id": "tempest-cephx-id-195673542", "format": "json"}]: dispatch
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-195673542, client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8
Dec  4 05:43:19 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-195673542,client_metadata.root=/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b/356d3e8a-f0f7-472b-9493-cce2a25c84e8],prefix=session evict} (starting...)
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-195673542, format:json, prefix:fs subvolume evict, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 9 op/s
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-195673542", "format": "json"} : dispatch
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"} : dispatch
Dec  4 05:43:19 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:43:19 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-195673542"}]': finished
Dec  4 05:43:19 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/a711eac5-4d18-4be8-8bdb-d9f7a5922442'.
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/.meta.tmp'
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/.meta.tmp' to config b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7/.meta'
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "format": "json"}]: dispatch
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec  4 05:43:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec  4 05:43:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 7 op/s
Dec  4 05:43:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f", "format": "json"}]: dispatch
Dec  4 05:43:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "format": "json"}]: dispatch
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee821ced-1eec-43e8-af63-bd95973cd67b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee821ced-1eec-43e8-af63-bd95973cd67b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:23.147+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee821ced-1eec-43e8-af63-bd95973cd67b' of type subvolume
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee821ced-1eec-43e8-af63-bd95973cd67b' of type subvolume
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee821ced-1eec-43e8-af63-bd95973cd67b", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee821ced-1eec-43e8-af63-bd95973cd67b'' moved to trashcan
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee821ced-1eec-43e8-af63-bd95973cd67b, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 10 op/s
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/251c0bc7-d836-4231-96c8-6099843232d7'.
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/.meta.tmp'
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/.meta.tmp' to config b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af/.meta'
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "format": "json"}]: dispatch
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec  4 05:43:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 6 op/s
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f_d1f5a442-8701-446e-ae89-917b6794340b", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f_d1f5a442-8701-446e-ae89-917b6794340b, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp'
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp' to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta'
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f_d1f5a442-8701-446e-ae89-917b6794340b, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "snap_name": "6f1499c3-6375-4ad6-94a0-953306cf2d1f", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp'
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta.tmp' to config b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3/.meta'
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6f1499c3-6375-4ad6-94a0-953306cf2d1f, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:43:26
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'cephfs.cephfs.meta']
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "format": "json"}]: dispatch
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:efb32910-eddf-42fc-9d2f-7022478fa2af, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:efb32910-eddf-42fc-9d2f-7022478fa2af, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:26 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:26.945+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efb32910-eddf-42fc-9d2f-7022478fa2af' of type subvolume
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efb32910-eddf-42fc-9d2f-7022478fa2af' of type subvolume
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "efb32910-eddf-42fc-9d2f-7022478fa2af", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/efb32910-eddf-42fc-9d2f-7022478fa2af'' moved to trashcan
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efb32910-eddf-42fc-9d2f-7022478fa2af, vol_name:cephfs) < ""
Dec  4 05:43:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 50 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 43 KiB/s wr, 6 op/s
Dec  4 05:43:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:43:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:43:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:43:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 73 KiB/s wr, 9 op/s
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "format": "json"}]: dispatch
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:30.140+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1276f5c4-3479-4622-a6c1-a1fd0508feb3' of type subvolume
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1276f5c4-3479-4622-a6c1-a1fd0508feb3' of type subvolume
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1276f5c4-3479-4622-a6c1-a1fd0508feb3", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1276f5c4-3479-4622-a6c1-a1fd0508feb3'' moved to trashcan
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1276f5c4-3479-4622-a6c1-a1fd0508feb3, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "format": "json"}]: dispatch
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:30.591+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dba135ca-99df-42d3-a2b3-b27ad79995b7' of type subvolume
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dba135ca-99df-42d3-a2b3-b27ad79995b7' of type subvolume
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dba135ca-99df-42d3-a2b3-b27ad79995b7", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dba135ca-99df-42d3-a2b3-b27ad79995b7'' moved to trashcan
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dba135ca-99df-42d3-a2b3-b27ad79995b7, vol_name:cephfs) < ""
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:43:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:43:31 np0005545273 podman[252780]: 2025-12-04 10:43:31.350018167 +0000 UTC m=+0.052183516 container create ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:43:31 np0005545273 systemd[1]: Started libpod-conmon-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope.
Dec  4 05:43:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 57 KiB/s wr, 7 op/s
Dec  4 05:43:31 np0005545273 podman[252780]: 2025-12-04 10:43:31.330456502 +0000 UTC m=+0.032621831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:43:31 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:43:31 np0005545273 podman[252780]: 2025-12-04 10:43:31.444377118 +0000 UTC m=+0.146542437 container init ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:43:31 np0005545273 podman[252780]: 2025-12-04 10:43:31.451140496 +0000 UTC m=+0.153305805 container start ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:43:31 np0005545273 podman[252780]: 2025-12-04 10:43:31.455199157 +0000 UTC m=+0.157364466 container attach ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:43:31 np0005545273 romantic_cori[252796]: 167 167
Dec  4 05:43:31 np0005545273 systemd[1]: libpod-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope: Deactivated successfully.
Dec  4 05:43:31 np0005545273 conmon[252796]: conmon ff1d1cf0e4f1fd5c4937 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope/container/memory.events
Dec  4 05:43:31 np0005545273 podman[252780]: 2025-12-04 10:43:31.45975898 +0000 UTC m=+0.161924289 container died ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:43:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fbc2c7fc9435236e73ee9799e57dc82fc6d68180647fae5fee31f0adb8b985a5-merged.mount: Deactivated successfully.
Dec  4 05:43:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  4 05:43:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:43:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:43:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:43:31 np0005545273 podman[252780]: 2025-12-04 10:43:31.498544793 +0000 UTC m=+0.200710112 container remove ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:43:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  4 05:43:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  4 05:43:31 np0005545273 systemd[1]: libpod-conmon-ff1d1cf0e4f1fd5c49375c4cd2afb433cdf3c570443b5b6f6fd2c5c4cbaa043d.scope: Deactivated successfully.
Dec  4 05:43:31 np0005545273 podman[252822]: 2025-12-04 10:43:31.69992016 +0000 UTC m=+0.041263436 container create 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 05:43:31 np0005545273 systemd[1]: Started libpod-conmon-274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009.scope.
Dec  4 05:43:31 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:43:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:31 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:31 np0005545273 podman[252822]: 2025-12-04 10:43:31.769337321 +0000 UTC m=+0.110680627 container init 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:43:31 np0005545273 podman[252822]: 2025-12-04 10:43:31.777433002 +0000 UTC m=+0.118776288 container start 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:43:31 np0005545273 podman[252822]: 2025-12-04 10:43:31.682665151 +0000 UTC m=+0.024008457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:43:31 np0005545273 podman[252822]: 2025-12-04 10:43:31.781500244 +0000 UTC m=+0.122843530 container attach 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:43:32 np0005545273 goofy_turing[252838]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:43:32 np0005545273 goofy_turing[252838]: --> All data devices are unavailable
Dec  4 05:43:32 np0005545273 systemd[1]: libpod-274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009.scope: Deactivated successfully.
Dec  4 05:43:32 np0005545273 podman[252822]: 2025-12-04 10:43:32.255770142 +0000 UTC m=+0.597113428 container died 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:43:32 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bf0bcd912dcfff03b1f987c16f2414995d4f69a0d696eefe1c3c06758b97d591-merged.mount: Deactivated successfully.
Dec  4 05:43:32 np0005545273 podman[252822]: 2025-12-04 10:43:32.299170659 +0000 UTC m=+0.640513945 container remove 274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:43:32 np0005545273 systemd[1]: libpod-conmon-274ae1f722a6131e6ead2b8c5e61015ec94450704648f8c0d22ff1c1e2cfe009.scope: Deactivated successfully.
Dec  4 05:43:32 np0005545273 podman[252931]: 2025-12-04 10:43:32.766961996 +0000 UTC m=+0.040453345 container create b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:43:32 np0005545273 systemd[1]: Started libpod-conmon-b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe.scope.
Dec  4 05:43:32 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:43:32 np0005545273 podman[252931]: 2025-12-04 10:43:32.74940779 +0000 UTC m=+0.022899169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:43:32 np0005545273 podman[252931]: 2025-12-04 10:43:32.84530956 +0000 UTC m=+0.118800919 container init b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:43:32 np0005545273 podman[252931]: 2025-12-04 10:43:32.85177775 +0000 UTC m=+0.125269099 container start b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:43:32 np0005545273 podman[252931]: 2025-12-04 10:43:32.855226996 +0000 UTC m=+0.128718345 container attach b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:43:32 np0005545273 peaceful_torvalds[252948]: 167 167
Dec  4 05:43:32 np0005545273 systemd[1]: libpod-b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe.scope: Deactivated successfully.
Dec  4 05:43:32 np0005545273 podman[252931]: 2025-12-04 10:43:32.85905591 +0000 UTC m=+0.132547279 container died b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:43:32 np0005545273 systemd[1]: var-lib-containers-storage-overlay-aa1ae07b434d91ce95445ac4aba202752cbf83048dccc252c3788af9729b879d-merged.mount: Deactivated successfully.
Dec  4 05:43:32 np0005545273 podman[252931]: 2025-12-04 10:43:32.897823743 +0000 UTC m=+0.171315092 container remove b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:43:32 np0005545273 systemd[1]: libpod-conmon-b8d186f2ff8382bc8491279d25667417b8519a02e602b878d96377ade989bdbe.scope: Deactivated successfully.
Dec  4 05:43:33 np0005545273 podman[252970]: 2025-12-04 10:43:33.057133256 +0000 UTC m=+0.043520961 container create 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:43:33 np0005545273 systemd[1]: Started libpod-conmon-25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082.scope.
Dec  4 05:43:33 np0005545273 podman[252970]: 2025-12-04 10:43:33.036494833 +0000 UTC m=+0.022882558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:43:33 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:43:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:33 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:33 np0005545273 podman[252970]: 2025-12-04 10:43:33.159195458 +0000 UTC m=+0.145583193 container init 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:43:33 np0005545273 podman[252970]: 2025-12-04 10:43:33.167250658 +0000 UTC m=+0.153638363 container start 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:43:33 np0005545273 podman[252970]: 2025-12-04 10:43:33.170597131 +0000 UTC m=+0.156984836 container attach 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:43:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]: {
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:    "0": [
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:        {
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "devices": [
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "/dev/loop3"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            ],
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_name": "ceph_lv0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_size": "21470642176",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "name": "ceph_lv0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "tags": {
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cluster_name": "ceph",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.crush_device_class": "",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.encrypted": "0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.objectstore": "bluestore",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osd_id": "0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.type": "block",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.vdo": "0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.with_tpm": "0"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            },
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "type": "block",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "vg_name": "ceph_vg0"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:        }
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:    ],
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:    "1": [
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:        {
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "devices": [
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "/dev/loop4"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            ],
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_name": "ceph_lv1",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_size": "21470642176",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "name": "ceph_lv1",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "tags": {
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cluster_name": "ceph",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.crush_device_class": "",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.encrypted": "0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.objectstore": "bluestore",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osd_id": "1",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.type": "block",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.vdo": "0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.with_tpm": "0"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            },
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "type": "block",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "vg_name": "ceph_vg1"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:        }
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:    ],
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:    "2": [
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:        {
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "devices": [
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "/dev/loop5"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            ],
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_name": "ceph_lv2",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_size": "21470642176",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "name": "ceph_lv2",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "tags": {
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.cluster_name": "ceph",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.crush_device_class": "",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.encrypted": "0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.objectstore": "bluestore",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osd_id": "2",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.type": "block",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.vdo": "0",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:                "ceph.with_tpm": "0"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            },
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "type": "block",
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:            "vg_name": "ceph_vg2"
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:        }
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]:    ]
Dec  4 05:43:33 np0005545273 jovial_darwin[252986]: }
Dec  4 05:43:33 np0005545273 systemd[1]: libpod-25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082.scope: Deactivated successfully.
Dec  4 05:43:33 np0005545273 podman[252970]: 2025-12-04 10:43:33.475008854 +0000 UTC m=+0.461396569 container died 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:43:33 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2a0a1337cd0dfbe338ced5a75b10355d8cb765d06916a4e7ca5f45b51cbc3e2e-merged.mount: Deactivated successfully.
Dec  4 05:43:33 np0005545273 podman[252970]: 2025-12-04 10:43:33.521159469 +0000 UTC m=+0.507547164 container remove 25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 05:43:33 np0005545273 systemd[1]: libpod-conmon-25d035601b48b0784df806fcef5e0367cd33adf9abb156c601efae0c9258e082.scope: Deactivated successfully.
Dec  4 05:43:33 np0005545273 podman[253068]: 2025-12-04 10:43:33.968076918 +0000 UTC m=+0.039102230 container create eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:43:33 np0005545273 systemd[1]: Started libpod-conmon-eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327.scope.
Dec  4 05:43:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:43:34 np0005545273 podman[253068]: 2025-12-04 10:43:34.028454097 +0000 UTC m=+0.099479439 container init eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:43:34 np0005545273 podman[253068]: 2025-12-04 10:43:34.034664311 +0000 UTC m=+0.105689623 container start eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:43:34 np0005545273 podman[253068]: 2025-12-04 10:43:34.038916916 +0000 UTC m=+0.109942228 container attach eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:43:34 np0005545273 optimistic_kirch[253086]: 167 167
Dec  4 05:43:34 np0005545273 systemd[1]: libpod-eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327.scope: Deactivated successfully.
Dec  4 05:43:34 np0005545273 podman[253068]: 2025-12-04 10:43:34.041529902 +0000 UTC m=+0.112555224 container died eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:43:34 np0005545273 podman[253068]: 2025-12-04 10:43:33.950986565 +0000 UTC m=+0.022011897 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:43:34 np0005545273 systemd[1]: var-lib-containers-storage-overlay-11550c334abdca8a354a3398de999e06b7e2d0be81b6a3197590b1494ab667b4-merged.mount: Deactivated successfully.
Dec  4 05:43:34 np0005545273 podman[253082]: 2025-12-04 10:43:34.078086328 +0000 UTC m=+0.075697459 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec  4 05:43:34 np0005545273 podman[253068]: 2025-12-04 10:43:34.084703283 +0000 UTC m=+0.155728595 container remove eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:43:34 np0005545273 systemd[1]: libpod-conmon-eeb6fb701d6e82a65b3a0641cb7b528573506e068104e928fe5b6606fd385327.scope: Deactivated successfully.
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e'.
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/.meta.tmp'
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/.meta.tmp' to config b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/.meta'
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "format": "json"}]: dispatch
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:34 np0005545273 podman[253128]: 2025-12-04 10:43:34.243009291 +0000 UTC m=+0.043322096 container create af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "cbe47551-19d7-448d-b120-9e300aa25c97", "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 systemd[1]: Started libpod-conmon-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope.
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:34 np0005545273 podman[253128]: 2025-12-04 10:43:34.226179643 +0000 UTC m=+0.026492478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:43:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:43:34 np0005545273 podman[253128]: 2025-12-04 10:43:34.331842855 +0000 UTC m=+0.132155690 container init af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:43:34 np0005545273 podman[253128]: 2025-12-04 10:43:34.33969625 +0000 UTC m=+0.140009065 container start af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  4 05:43:34 np0005545273 podman[253128]: 2025-12-04 10:43:34.344172011 +0000 UTC m=+0.144484836 container attach af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "cbe47551-19d7-448d-b120-9e300aa25c97", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:cbe47551-19d7-448d-b120-9e300aa25c97, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec  4 05:43:34 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID eve49 with tenant 7e0c9a3966b443c7bbb289ba33849550
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:34 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec  4 05:43:35 np0005545273 lvm[253223]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:43:35 np0005545273 lvm[253224]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:43:35 np0005545273 lvm[253223]: VG ceph_vg0 finished
Dec  4 05:43:35 np0005545273 lvm[253224]: VG ceph_vg1 finished
Dec  4 05:43:35 np0005545273 lvm[253226]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:43:35 np0005545273 lvm[253226]: VG ceph_vg2 finished
Dec  4 05:43:35 np0005545273 awesome_sutherland[253145]: {}
Dec  4 05:43:35 np0005545273 systemd[1]: libpod-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope: Deactivated successfully.
Dec  4 05:43:35 np0005545273 systemd[1]: libpod-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope: Consumed 1.433s CPU time.
Dec  4 05:43:35 np0005545273 podman[253128]: 2025-12-04 10:43:35.206187952 +0000 UTC m=+1.006500797 container died af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:43:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b69bb9445c08b675f0cc5855d9a6ee3922cdf97ec7d246a884f787b230d82311-merged.mount: Deactivated successfully.
Dec  4 05:43:35 np0005545273 podman[253128]: 2025-12-04 10:43:35.256994782 +0000 UTC m=+1.057307597 container remove af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:43:35 np0005545273 systemd[1]: libpod-conmon-af017a0e54e7b8d8144321e4e5cbd5abd6e36e86f66ec5ac1b6a79163a783f3b.scope: Deactivated successfully.
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:43:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec  4 05:43:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a007c67c-4b9e-45ce-9f08-f1379750eb54", "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec  4 05:43:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:43:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd'.
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/.meta.tmp'
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/.meta.tmp' to config b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/.meta'
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "format": "json"}]: dispatch
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:43:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:43:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671164381540543 of space, bias 1.0, pg target 0.2001349314462163 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0001505944399253973 of space, bias 4.0, pg target 0.18071332791047676 quantized to 16 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:43:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 50 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 61 KiB/s wr, 8 op/s
Dec  4 05:43:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec  4 05:43:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec  4 05:43:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec  4 05:43:38 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID eve48 with tenant 7e0c9a3966b443c7bbb289ba33849550
Dec  4 05:43:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec  4 05:43:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a007c67c-4b9e-45ce-9f08-f1379750eb54", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec  4 05:43:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a007c67c-4b9e-45ce-9f08-f1379750eb54, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/cc465653-3416-44ed-bf17-d6453499d24f'.
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/.meta.tmp'
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/.meta.tmp' to config b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878/.meta'
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "format": "json"}]: dispatch
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 51 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 75 KiB/s wr, 10 op/s
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40'.
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/.meta.tmp'
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/.meta.tmp' to config b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/.meta'
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "format": "json"}]: dispatch
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:43:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec  4 05:43:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec  4 05:43:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec  4 05:43:40 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID Joe with tenant d831ca1755a740e7819c02d320ecd2a0
Dec  4 05:43:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec  4 05:43:41 np0005545273 nova_compute[244644]: 2025-12-04 10:43:41.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:41 np0005545273 nova_compute[244644]: 2025-12-04 10:43:41.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:43:41 np0005545273 nova_compute[244644]: 2025-12-04 10:43:41.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:43:41 np0005545273 nova_compute[244644]: 2025-12-04 10:43:41.355 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:43:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec  4 05:43:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_264f5d7d-c08e-42d9-b63c-55452b2c5eef", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 51 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 825 B/s rd, 61 KiB/s wr, 8 op/s
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "format": "json"}]: dispatch
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve48", "format": "json"}]: dispatch
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e
Dec  4 05:43:42 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e],prefix=session evict} (starting...)
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:42 np0005545273 nova_compute[244644]: 2025-12-04 10:43:42.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:42 np0005545273 nova_compute[244644]: 2025-12-04 10:43:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:42 np0005545273 nova_compute[244644]: 2025-12-04 10:43:42.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  4 05:43:42 np0005545273 nova_compute[244644]: 2025-12-04 10:43:42.361 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec  4 05:43:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "376dc4db-618b-4da3-9877-daf0c0185878", "format": "json"}]: dispatch
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:376dc4db-618b-4da3-9877-daf0c0185878, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:376dc4db-618b-4da3-9877-daf0c0185878, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:42 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:42.893+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '376dc4db-618b-4da3-9877-daf0c0185878' of type subvolume
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '376dc4db-618b-4da3-9877-daf0c0185878' of type subvolume
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "376dc4db-618b-4da3-9877-daf0c0185878", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/376dc4db-618b-4da3-9877-daf0c0185878'' moved to trashcan
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:376dc4db-618b-4da3-9877-daf0c0185878, vol_name:cephfs) < ""
Dec  4 05:43:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec  4 05:43:43 np0005545273 podman[253269]: 2025-12-04 10:43:43.946174559 +0000 UTC m=+0.056507963 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:43:43 np0005545273 podman[253268]: 2025-12-04 10:43:43.980030398 +0000 UTC m=+0.090364872 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5'.
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/.meta.tmp'
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/.meta.tmp' to config b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/.meta'
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "format": "json"}]: dispatch
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:44 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.361 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.414 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.415 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.415 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.415 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.416 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:43:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec  4 05:43:45 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "tenant_id": "7e0c9a3966b443c7bbb289ba33849550", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec  4 05:43:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec  4 05:43:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec  4 05:43:45 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID eve47 with tenant 7e0c9a3966b443c7bbb289ba33849550
Dec  4 05:43:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, tenant_id:7e0c9a3966b443c7bbb289ba33849550, vol_name:cephfs) < ""
Dec  4 05:43:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:43:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338810546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:43:45 np0005545273 nova_compute[244644]: 2025-12-04 10:43:45.973 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.178 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.180 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5111MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.180 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.181 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.400 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.401 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:43:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec  4 05:43:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_dec20aa6-db73-446c-9d5e-8597f7adaaa8", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.488 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/81816eed-4b43-43d9-9a2c-a8df9562f2c7'.
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/.meta.tmp'
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/.meta.tmp' to config b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889/.meta'
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "format": "json"}]: dispatch
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec  4 05:43:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec  4 05:43:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.549 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.549 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.580 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.602 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  4 05:43:46 np0005545273 nova_compute[244644]: 2025-12-04 10:43:46.619 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:43:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:43:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632855522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.138 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.144 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.158 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.159 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.160 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.160 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.160 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  4 05:43:47 np0005545273 nova_compute[244644]: 2025-12-04 10:43:47.170 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 51 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 100 KiB/s wr, 12 op/s
Dec  4 05:43:47 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:47 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec  4 05:43:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec  4 05:43:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec  4 05:43:47 np0005545273 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Dec  4 05:43:47 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec  4 05:43:47 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:47.961+0000 7f8423c95640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec  4 05:43:47 np0005545273 ceph-mgr[75651]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec  4 05:43:48 np0005545273 nova_compute[244644]: 2025-12-04 10:43:48.150 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:48 np0005545273 nova_compute[244644]: 2025-12-04 10:43:48.151 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:48 np0005545273 nova_compute[244644]: 2025-12-04 10:43:48.181 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:48 np0005545273 nova_compute[244644]: 2025-12-04 10:43:48.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:43:48 np0005545273 nova_compute[244644]: 2025-12-04 10:43:48.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:43:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec  4 05:43:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 52 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 303 B/s rd, 106 KiB/s wr, 13 op/s
Dec  4 05:43:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "format": "json"}]: dispatch
Dec  4 05:43:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec  4 05:43:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec  4 05:43:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Dec  4 05:43:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec  4 05:43:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve47", "format": "json"}]: dispatch
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e
Dec  4 05:43:50 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e],prefix=session evict} (starting...)
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "format": "json"}]: dispatch
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:50 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:50.153+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889' of type subvolume
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889' of type subvolume
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889'' moved to trashcan
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66a8cb6b-a9ba-4b95-98a0-d3ae1eac9889, vol_name:cephfs) < ""
Dec  4 05:43:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec  4 05:43:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec  4 05:43:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Dec  4 05:43:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 52 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 90 KiB/s wr, 11 op/s
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} v 0)
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec  4 05:43:52 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-1322111508 with tenant c2e0964e5703431eab30fd7c235961ae
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume authorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1322111508", "caps": ["mds", "allow rw path=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_187ec7c1-10e2-40cd-bd3e-105526ebd065", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/92c8c8fb-87d8-4b63-b6fb-001ecf8b1670'.
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "format": "json"}]: dispatch
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:43:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:43:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:43:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 128 KiB/s wr, 15 op/s
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "format": "json"}]: dispatch
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "auth_id": "eve49", "format": "json"}]: dispatch
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e
Dec  4 05:43:54 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8/051ecf4a-26e7-46f9-b39b-fbfcbd86270e],prefix=session evict} (starting...)
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec  4 05:43:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "format": "json"}]: dispatch
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:43:54.613+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dec20aa6-db73-446c-9d5e-8597f7adaaa8' of type subvolume
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dec20aa6-db73-446c-9d5e-8597f7adaaa8' of type subvolume
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dec20aa6-db73-446c-9d5e-8597f7adaaa8", "force": true, "format": "json"}]: dispatch
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dec20aa6-db73-446c-9d5e-8597f7adaaa8'' moved to trashcan
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:43:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dec20aa6-db73-446c-9d5e-8597f7adaaa8, vol_name:cephfs) < ""
Dec  4 05:43:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:43:54.910 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:43:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:43:54.910 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:43:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:43:54.910 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 75 KiB/s wr, 9 op/s
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "format": "json"}]: dispatch
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '187ec7c1-10e2-40cd-bd3e-105526ebd065'
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "Joe", "format": "json"}]: dispatch
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5
Dec  4 05:43:55 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5],prefix=session evict} (starting...)
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c", "format": "json"}]: dispatch
Dec  4 05:43:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:43:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:43:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 52 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 75 KiB/s wr, 9 op/s
Dec  4 05:43:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:43:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:43:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:43:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:43:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:43:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "format": "json"}]: dispatch
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} v 0)
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"} v 0)
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"} : dispatch
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"}]': finished
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume deauthorize, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "auth_id": "tempest-cephx-id-1322111508", "format": "json"}]: dispatch
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1322111508, client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5
Dec  4 05:43:59 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-1322111508,client_metadata.root=/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065/e7c861f3-7356-4619-8b53-f507a1b986c5],prefix=session evict} (starting...)
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1322111508, format:json, prefix:fs subvolume evict, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1322111508", "format": "json"} : dispatch
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"} : dispatch
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1322111508"}]': finished
Dec  4 05:43:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:43:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 107 KiB/s wr, 13 op/s
Dec  4 05:44:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48", "format": "json"}]: dispatch
Dec  4 05:44:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 71 KiB/s wr, 9 op/s
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48_6f006936-30e9-49da-a729-8953c011f3e4", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48_6f006936-30e9-49da-a729-8953c011f3e4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48_6f006936-30e9-49da-a729-8953c011f3e4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "a2731753-2916-43b4-aaed-f178c8b9ed48", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a2731753-2916-43b4-aaed-f178c8b9ed48, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "format": "json"}]: dispatch
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:44:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec  4 05:44:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec  4 05:44:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Dec  4 05:44:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec  4 05:44:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "auth_id": "Joe", "format": "json"}]: dispatch
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40
Dec  4 05:44:02 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef/3f2b93a1-903b-4a14-b59e-d107f0630d40],prefix=session evict} (starting...)
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:44:03 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec  4 05:44:03 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec  4 05:44:03 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Dec  4 05:44:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 104 KiB/s wr, 12 op/s
Dec  4 05:44:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:04 np0005545273 podman[253364]: 2025-12-04 10:44:04.96849321 +0000 UTC m=+0.068123800 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a", "format": "json"}]: dispatch
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 65 KiB/s wr, 8 op/s
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c'.
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/.meta.tmp'
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/.meta.tmp' to config b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/.meta'
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "format": "json"}]: dispatch
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:44:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4923 writes, 22K keys, 4923 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4923 writes, 4923 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1593 writes, 7340 keys, 1593 commit groups, 1.0 writes per commit group, ingest: 10.02 MB, 0.02 MB/s#012Interval WAL: 1593 writes, 1593 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.1      0.24              0.07        12    0.020       0      0       0.0       0.0#012  L6      1/0    7.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    142.5    116.9      0.68              0.21        11    0.062     49K   5820       0.0       0.0#012 Sum      1/0    7.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    105.4    113.5      0.92              0.28        23    0.040     49K   5820       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2    128.8    130.0      0.36              0.13        10    0.036     24K   2613       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    142.5    116.9      0.68              0.21        11    0.062     49K   5820       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.6      0.23              0.07        11    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.9 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56349f89b8d0#2 capacity: 304.00 MB usage: 9.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000202 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(566,8.89 MB,2.92528%) FilterBlock(24,143.17 KB,0.0459922%) IndexBlock(24,267.27 KB,0.0858558%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  4 05:44:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "admin", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:44:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec  4 05:44:06 np0005545273 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec  4 05:44:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec  4 05:44:06 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:06.455+0000 7f8423c95640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec  4 05:44:06 np0005545273 ceph-mgr[75651]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  4 05:44:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  4 05:44:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 53 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 78 KiB/s wr, 9 op/s
Dec  4 05:44:09 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "tenant_id": "3ba683091e694bf1800f8fdcd57277cf", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:44:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume authorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, tenant_id:3ba683091e694bf1800f8fdcd57277cf, vol_name:cephfs) < ""
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} v 0)
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec  4 05:44:09 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID tempest-cephx-id-792738809 with tenant 3ba683091e694bf1800f8fdcd57277cf
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume authorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, tenant_id:3ba683091e694bf1800f8fdcd57277cf, vol_name:cephfs) < ""
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-792738809", "caps": ["mds", "allow rw path=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_485981c2-4d65-44e2-a4c4-d55efb5d64b6", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 53 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 9 op/s
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a_7448b403-dc03-4f78-8d83-a6c5ad1ab7d7", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a_7448b403-dc03-4f78-8d83-a6c5ad1ab7d7, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a_7448b403-dc03-4f78-8d83-a6c5ad1ab7d7, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "eb780175-b147-4b28-95c7-37659a64381a", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb780175-b147-4b28-95c7-37659a64381a, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "tenant_id": "d831ca1755a740e7819c02d320ecd2a0", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec  4 05:44:10 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID david with tenant d831ca1755a740e7819c02d320ecd2a0
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, tenant_id:d831ca1755a740e7819c02d320ecd2a0, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_b590878f-f5a4-4c4c-97ac-af9c32c4449c", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "format": "json"}]: dispatch
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume deauthorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} v 0)
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"} v 0)
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"} : dispatch
Dec  4 05:44:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"}]': finished
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume deauthorize, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "auth_id": "tempest-cephx-id-792738809", "format": "json"}]: dispatch
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume evict, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-792738809, client_metadata.root=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c
Dec  4 05:44:10 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=tempest-cephx-id-792738809,client_metadata.root=/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6/f61733c8-f0e2-4864-8c3c-0403ba35205c],prefix=session evict} (starting...)
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-792738809, format:json, prefix:fs subvolume evict, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "format": "json"}]: dispatch
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:10.817+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '485981c2-4d65-44e2-a4c4-d55efb5d64b6' of type subvolume
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '485981c2-4d65-44e2-a4c4-d55efb5d64b6' of type subvolume
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "485981c2-4d65-44e2-a4c4-d55efb5d64b6", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/485981c2-4d65-44e2-a4c4-d55efb5d64b6'' moved to trashcan
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:44:10 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:485981c2-4d65-44e2-a4c4-d55efb5d64b6, vol_name:cephfs) < ""
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-792738809", "format": "json"} : dispatch
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"} : dispatch
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-792738809"}]': finished
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897176570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:44:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897176570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:44:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 111 KiB/s wr, 11 op/s
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 126 KiB/s wr, 15 op/s
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f", "format": "json"}]: dispatch
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/06397792-300c-42d8-a6e6-8298e27470f5'.
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/.meta.tmp'
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/.meta.tmp' to config b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/.meta'
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "format": "json"}]: dispatch
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:44:13 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:44:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec  4 05:44:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec  4 05:44:14 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec  4 05:44:14 np0005545273 podman[253387]: 2025-12-04 10:44:14.955174981 +0000 UTC m=+0.059760325 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  4 05:44:14 np0005545273 podman[253386]: 2025-12-04 10:44:14.992700922 +0000 UTC m=+0.096705041 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:44:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 126 KiB/s wr, 14 op/s
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "tenant_id": "c2e0964e5703431eab30fd7c235961ae", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec  4 05:44:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec  4 05:44:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, tenant_id:c2e0964e5703431eab30fd7c235961ae, vol_name:cephfs) < ""
Dec  4 05:44:17 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:17.272+0000 7f8423c95640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec  4 05:44:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 101 KiB/s wr, 10 op/s
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f_cf03a839-1ccb-4948-9c00-9441d759b0d0", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f_cf03a839-1ccb-4948-9c00-9441d759b0d0, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f_cf03a839-1ccb-4948-9c00-9441d759b0d0, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "d30d966b-f15f-4cb7-9d33-c43bf788f74f", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d30d966b-f15f-4cb7-9d33-c43bf788f74f, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 54 MiB data, 243 MiB used, 60 GiB / 60 GiB avail; 507 B/s rd, 35 KiB/s wr, 7 op/s
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "format": "json"}]: dispatch
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume 'bd4b4cb5-5fca-4376-8188-5f69aab6c36d'
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "auth_id": "david", "format": "json"}]: dispatch
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/06397792-300c-42d8-a6e6-8298e27470f5
Dec  4 05:44:20 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d/06397792-300c-42d8-a6e6-8298e27470f5],prefix=session evict} (starting...)
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4", "format": "json"}]: dispatch
Dec  4 05:44:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 62 KiB/s wr, 7 op/s
Dec  4 05:44:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 50 KiB/s wr, 5 op/s
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "format": "json"}]: dispatch
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec  4 05:44:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "david", "format": "json"}]: dispatch
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd
Dec  4 05:44:24 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c/5260771a-3d40-48ec-b1ac-44fca9eeb9bd],prefix=session evict} (starting...)
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec  4 05:44:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec  4 05:44:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec  4 05:44:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Dec  4 05:44:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec  4 05:44:25 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec  4 05:44:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 54 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 42 KiB/s wr, 5 op/s
Dec  4 05:44:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:44:26
Dec  4 05:44:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:44:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:44:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log']
Dec  4 05:44:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4_234ea353-60b9-4db4-8e91-5714a5b7ce6e", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4_234ea353-60b9-4db4-8e91-5714a5b7ce6e, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4_234ea353-60b9-4db4-8e91-5714a5b7ce6e, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "e2969563-45cb-4ab6-812a-aad69d2395d4", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e2969563-45cb-4ab6-812a-aad69d2395d4, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 76 KiB/s wr, 6 op/s
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba'.
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/.meta.tmp'
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/.meta.tmp' to config b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/.meta'
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "format": "json"}]: dispatch
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:44:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:44:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "format": "json"}]: dispatch
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:28 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:28.844+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd4b4cb5-5fca-4376-8188-5f69aab6c36d' of type subvolume
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bd4b4cb5-5fca-4376-8188-5f69aab6c36d' of type subvolume
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bd4b4cb5-5fca-4376-8188-5f69aab6c36d", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bd4b4cb5-5fca-4376-8188-5f69aab6c36d'' moved to trashcan
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:44:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bd4b4cb5-5fca-4376-8188-5f69aab6c36d, vol_name:cephfs) < ""
Dec  4 05:44:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 55 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 35 KiB/s wr, 5 op/s
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec  4 05:44:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 143 B/s rd, 90 KiB/s wr, 7 op/s
Dec  4 05:44:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:44:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:31 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59", "format": "json"}]: dispatch
Dec  4 05:44:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "format": "json"}]: dispatch
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:32 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:32.606+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '187ec7c1-10e2-40cd-bd3e-105526ebd065' of type subvolume
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '187ec7c1-10e2-40cd-bd3e-105526ebd065' of type subvolume
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "187ec7c1-10e2-40cd-bd3e-105526ebd065", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/187ec7c1-10e2-40cd-bd3e-105526ebd065'' moved to trashcan
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:44:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:187ec7c1-10e2-40cd-bd3e-105526ebd065, vol_name:cephfs) < ""
Dec  4 05:44:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 631 B/s rd, 81 KiB/s wr, 9 op/s
Dec  4 05:44:33 np0005545273 nova_compute[244644]: 2025-12-04 10:44:33.748 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec  4 05:44:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec  4 05:44:34 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:44:35 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 48 KiB/s wr, 7 op/s
Dec  4 05:44:35 np0005545273 podman[253461]: 2025-12-04 10:44:35.570545256 +0000 UTC m=+0.059829486 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:44:35 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:44:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "format": "json"}]: dispatch
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:36 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:36.272+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '264f5d7d-c08e-42d9-b63c-55452b2c5eef' of type subvolume
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '264f5d7d-c08e-42d9-b63c-55452b2c5eef' of type subvolume
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "264f5d7d-c08e-42d9-b63c-55452b2c5eef", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/264f5d7d-c08e-42d9-b63c-55452b2c5eef'' moved to trashcan
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:44:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:264f5d7d-c08e-42d9-b63c-55452b2c5eef, vol_name:cephfs) < ""
Dec  4 05:44:36 np0005545273 podman[253600]: 2025-12-04 10:44:36.642384171 +0000 UTC m=+0.042083465 container create 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:44:36 np0005545273 systemd[1]: Started libpod-conmon-87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68.scope.
Dec  4 05:44:36 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:44:36 np0005545273 podman[253600]: 2025-12-04 10:44:36.624060317 +0000 UTC m=+0.023759641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:44:36 np0005545273 podman[253600]: 2025-12-04 10:44:36.723862734 +0000 UTC m=+0.123562048 container init 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:44:36 np0005545273 podman[253600]: 2025-12-04 10:44:36.731924453 +0000 UTC m=+0.131623737 container start 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec  4 05:44:36 np0005545273 podman[253600]: 2025-12-04 10:44:36.735248636 +0000 UTC m=+0.134947950 container attach 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:44:36 np0005545273 elastic_meninsky[253616]: 167 167
Dec  4 05:44:36 np0005545273 systemd[1]: libpod-87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68.scope: Deactivated successfully.
Dec  4 05:44:36 np0005545273 podman[253600]: 2025-12-04 10:44:36.738503477 +0000 UTC m=+0.138202781 container died 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:44:36 np0005545273 systemd[1]: var-lib-containers-storage-overlay-890b9a7e75528141c9312b58b563ecee25ee39b775c0b8da1dfa899cae451f83-merged.mount: Deactivated successfully.
Dec  4 05:44:36 np0005545273 podman[253600]: 2025-12-04 10:44:36.782287373 +0000 UTC m=+0.181986667 container remove 87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:44:36 np0005545273 systemd[1]: libpod-conmon-87f1fc0efa68489c70d5a44834ec74e29cd856e5a45f8e2787b7dea77ec12e68.scope: Deactivated successfully.
Dec  4 05:44:36 np0005545273 podman[253639]: 2025-12-04 10:44:36.973255872 +0000 UTC m=+0.042532797 container create b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  4 05:44:37 np0005545273 systemd[1]: Started libpod-conmon-b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971.scope.
Dec  4 05:44:37 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:44:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:37 np0005545273 podman[253639]: 2025-12-04 10:44:36.953707257 +0000 UTC m=+0.022984202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:44:37 np0005545273 podman[253639]: 2025-12-04 10:44:37.061299567 +0000 UTC m=+0.130576522 container init b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:44:37 np0005545273 podman[253639]: 2025-12-04 10:44:37.068602778 +0000 UTC m=+0.137879713 container start b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:44:37 np0005545273 podman[253639]: 2025-12-04 10:44:37.072144115 +0000 UTC m=+0.141421040 container attach b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:44:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:44:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:44:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671892818066651 of space, bias 1.0, pg target 0.20015678454199953 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00022318334157806733 of space, bias 4.0, pg target 0.26782000989368077 quantized to 16 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 7.630884938464543e-07 of space, bias 1.0, pg target 0.00022892654815393631 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59_408e5d3e-e739-41e2-98d0-543f56b49908", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59_408e5d3e-e739-41e2-98d0-543f56b49908, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 127 KiB/s wr, 9 op/s
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59_408e5d3e-e739-41e2-98d0-543f56b49908, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "7a27b9fe-c0b9-4c84-a258-8ecce5900f59", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:37 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a27b9fe-c0b9-4c84-a258-8ecce5900f59, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:37 np0005545273 blissful_jemison[253656]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:44:37 np0005545273 blissful_jemison[253656]: --> All data devices are unavailable
Dec  4 05:44:37 np0005545273 systemd[1]: libpod-b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971.scope: Deactivated successfully.
Dec  4 05:44:37 np0005545273 podman[253639]: 2025-12-04 10:44:37.523748821 +0000 UTC m=+0.593025746 container died b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:44:37 np0005545273 systemd[1]: var-lib-containers-storage-overlay-129bd522ec3ad9ea7b6c3e4ade1147b1a7f9e7750f902bc6588d536fadca8019-merged.mount: Deactivated successfully.
Dec  4 05:44:37 np0005545273 podman[253639]: 2025-12-04 10:44:37.566713437 +0000 UTC m=+0.635990362 container remove b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:44:37 np0005545273 systemd[1]: libpod-conmon-b3c6d07633ee82d7ed822b4f028afcc9279c99cd9dcbb6fea00f06a9b0f99971.scope: Deactivated successfully.
Dec  4 05:44:38 np0005545273 podman[253751]: 2025-12-04 10:44:38.02463288 +0000 UTC m=+0.041202834 container create 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:44:38 np0005545273 systemd[1]: Started libpod-conmon-3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0.scope.
Dec  4 05:44:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:44:38 np0005545273 podman[253751]: 2025-12-04 10:44:38.006689454 +0000 UTC m=+0.023259428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:44:38 np0005545273 podman[253751]: 2025-12-04 10:44:38.105079945 +0000 UTC m=+0.121649929 container init 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:44:38 np0005545273 podman[253751]: 2025-12-04 10:44:38.11415418 +0000 UTC m=+0.130724134 container start 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:44:38 np0005545273 podman[253751]: 2025-12-04 10:44:38.117799431 +0000 UTC m=+0.134369405 container attach 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:44:38 np0005545273 funny_greider[253767]: 167 167
Dec  4 05:44:38 np0005545273 systemd[1]: libpod-3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0.scope: Deactivated successfully.
Dec  4 05:44:38 np0005545273 podman[253751]: 2025-12-04 10:44:38.121487113 +0000 UTC m=+0.138057067 container died 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:44:38 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9cfeac85b3652f0646b2698ee37b9c31ae7f9e3e8c039a089e128c7c98dab6c3-merged.mount: Deactivated successfully.
Dec  4 05:44:38 np0005545273 podman[253751]: 2025-12-04 10:44:38.158916751 +0000 UTC m=+0.175486705 container remove 3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:44:38 np0005545273 systemd[1]: libpod-conmon-3c62c276492e92db5643fe10ffa8539d5c5fbbe4a99b83f971cd501b6fae4ab0.scope: Deactivated successfully.
Dec  4 05:44:38 np0005545273 podman[253791]: 2025-12-04 10:44:38.315497176 +0000 UTC m=+0.043238794 container create 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:44:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:44:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:44:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:38 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:44:38 np0005545273 systemd[1]: Started libpod-conmon-5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50.scope.
Dec  4 05:44:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:44:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:44:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:38 np0005545273 podman[253791]: 2025-12-04 10:44:38.295602693 +0000 UTC m=+0.023344341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:44:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:38 np0005545273 podman[253791]: 2025-12-04 10:44:38.399221264 +0000 UTC m=+0.126962892 container init 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:44:38 np0005545273 podman[253791]: 2025-12-04 10:44:38.408611297 +0000 UTC m=+0.136352955 container start 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:44:38 np0005545273 podman[253791]: 2025-12-04 10:44:38.412719049 +0000 UTC m=+0.140460667 container attach 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]: {
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:    "0": [
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:        {
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "devices": [
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "/dev/loop3"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            ],
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_name": "ceph_lv0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_size": "21470642176",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "name": "ceph_lv0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "tags": {
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cluster_name": "ceph",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.crush_device_class": "",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.encrypted": "0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.objectstore": "bluestore",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osd_id": "0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.type": "block",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.vdo": "0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.with_tpm": "0"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            },
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "type": "block",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "vg_name": "ceph_vg0"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:        }
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:    ],
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:    "1": [
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:        {
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "devices": [
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "/dev/loop4"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            ],
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_name": "ceph_lv1",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_size": "21470642176",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "name": "ceph_lv1",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "tags": {
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cluster_name": "ceph",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.crush_device_class": "",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.encrypted": "0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.objectstore": "bluestore",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osd_id": "1",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.type": "block",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.vdo": "0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.with_tpm": "0"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            },
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "type": "block",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "vg_name": "ceph_vg1"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:        }
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:    ],
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:    "2": [
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:        {
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "devices": [
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "/dev/loop5"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            ],
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_name": "ceph_lv2",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_size": "21470642176",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "name": "ceph_lv2",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "tags": {
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.cluster_name": "ceph",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.crush_device_class": "",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.encrypted": "0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.objectstore": "bluestore",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osd_id": "2",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.type": "block",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.vdo": "0",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:                "ceph.with_tpm": "0"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            },
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "type": "block",
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:            "vg_name": "ceph_vg2"
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:        }
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]:    ]
Dec  4 05:44:38 np0005545273 flamboyant_merkle[253808]: }
Dec  4 05:44:38 np0005545273 systemd[1]: libpod-5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50.scope: Deactivated successfully.
Dec  4 05:44:38 np0005545273 podman[253791]: 2025-12-04 10:44:38.741133268 +0000 UTC m=+0.468874906 container died 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:44:38 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d0422216a680b21ed136fe8f7b6b9fa4676d7ed2ce275297e25949620fa27c90-merged.mount: Deactivated successfully.
Dec  4 05:44:38 np0005545273 podman[253791]: 2025-12-04 10:44:38.790542034 +0000 UTC m=+0.518283672 container remove 5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_merkle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:44:38 np0005545273 systemd[1]: libpod-conmon-5e7b9b60f19881143cc8dcdf486068e3ab529a59588706e60fe05a144fb70b50.scope: Deactivated successfully.
Dec  4 05:44:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:39 np0005545273 podman[253891]: 2025-12-04 10:44:39.232442069 +0000 UTC m=+0.040487536 container create c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:44:39 np0005545273 systemd[1]: Started libpod-conmon-c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8.scope.
Dec  4 05:44:39 np0005545273 podman[253891]: 2025-12-04 10:44:39.214132634 +0000 UTC m=+0.022178131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:44:39 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:44:39 np0005545273 podman[253891]: 2025-12-04 10:44:39.32557171 +0000 UTC m=+0.133617197 container init c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:44:39 np0005545273 podman[253891]: 2025-12-04 10:44:39.33203806 +0000 UTC m=+0.140083527 container start c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  4 05:44:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec  4 05:44:39 np0005545273 boring_lamarr[253907]: 167 167
Dec  4 05:44:39 np0005545273 systemd[1]: libpod-c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8.scope: Deactivated successfully.
Dec  4 05:44:39 np0005545273 podman[253891]: 2025-12-04 10:44:39.33727249 +0000 UTC m=+0.145317957 container attach c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:44:39 np0005545273 podman[253891]: 2025-12-04 10:44:39.338140642 +0000 UTC m=+0.146186139 container died c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:44:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec  4 05:44:39 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec  4 05:44:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-abcb6b4239bf0c61c9d2ff59390ea39460968e7d6186f6f3a51265e6df907dc1-merged.mount: Deactivated successfully.
Dec  4 05:44:39 np0005545273 podman[253891]: 2025-12-04 10:44:39.371749746 +0000 UTC m=+0.179795203 container remove c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:44:39 np0005545273 systemd[1]: libpod-conmon-c84b00290bc558cbb4ed7c994005870520aaf9c73c1f67f544fc209080f7b5e8.scope: Deactivated successfully.
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 55 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 83 KiB/s wr, 10 op/s
Dec  4 05:44:39 np0005545273 podman[253931]: 2025-12-04 10:44:39.534080524 +0000 UTC m=+0.040457035 container create 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:44:39 np0005545273 systemd[1]: Started libpod-conmon-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope.
Dec  4 05:44:39 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:44:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:44:39 np0005545273 podman[253931]: 2025-12-04 10:44:39.516179039 +0000 UTC m=+0.022555570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:44:39 np0005545273 podman[253931]: 2025-12-04 10:44:39.619976335 +0000 UTC m=+0.126352866 container init 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:44:39 np0005545273 podman[253931]: 2025-12-04 10:44:39.626027825 +0000 UTC m=+0.132404336 container start 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:44:39 np0005545273 podman[253931]: 2025-12-04 10:44:39.629159963 +0000 UTC m=+0.135536474 container attach 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "auth_id": "admin", "format": "json"}]: dispatch
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec  4 05:44:39 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:39.795+0000 7f8423c95640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "format": "json"}]: dispatch
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:39 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:39.974+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b590878f-f5a4-4c4c-97ac-af9c32c4449c' of type subvolume
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b590878f-f5a4-4c4c-97ac-af9c32c4449c' of type subvolume
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b590878f-f5a4-4c4c-97ac-af9c32c4449c", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b590878f-f5a4-4c4c-97ac-af9c32c4449c'' moved to trashcan
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:44:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b590878f-f5a4-4c4c-97ac-af9c32c4449c, vol_name:cephfs) < ""
Dec  4 05:44:40 np0005545273 lvm[254026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:44:40 np0005545273 lvm[254026]: VG ceph_vg0 finished
Dec  4 05:44:40 np0005545273 lvm[254025]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:44:40 np0005545273 lvm[254025]: VG ceph_vg1 finished
Dec  4 05:44:40 np0005545273 lvm[254028]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:44:40 np0005545273 lvm[254028]: VG ceph_vg2 finished
Dec  4 05:44:40 np0005545273 lvm[254030]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:44:40 np0005545273 lvm[254030]: VG ceph_vg2 finished
Dec  4 05:44:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec  4 05:44:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec  4 05:44:40 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec  4 05:44:40 np0005545273 relaxed_joliot[253947]: {}
Dec  4 05:44:40 np0005545273 systemd[1]: libpod-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope: Deactivated successfully.
Dec  4 05:44:40 np0005545273 systemd[1]: libpod-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope: Consumed 1.286s CPU time.
Dec  4 05:44:40 np0005545273 podman[253931]: 2025-12-04 10:44:40.402038951 +0000 UTC m=+0.908415462 container died 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:44:40 np0005545273 systemd[1]: var-lib-containers-storage-overlay-cb01ba9f4b9422c525c061dd66bcfb4d07ac16db3eab69e272d532abbbd5303e-merged.mount: Deactivated successfully.
Dec  4 05:44:40 np0005545273 podman[253931]: 2025-12-04 10:44:40.449678013 +0000 UTC m=+0.956054524 container remove 09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_joliot, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:44:40 np0005545273 systemd[1]: libpod-conmon-09ebf257a3582610fd0683ecb60a11244be39f46f841a6af1e7476875c6c4325.scope: Deactivated successfully.
Dec  4 05:44:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:44:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:44:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:44:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:44:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:44:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:44:41 np0005545273 nova_compute[244644]: 2025-12-04 10:44:41.425 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:41 np0005545273 nova_compute[244644]: 2025-12-04 10:44:41.425 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:44:41 np0005545273 nova_compute[244644]: 2025-12-04 10:44:41.425 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:44:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 431 B/s rd, 175 KiB/s wr, 13 op/s
Dec  4 05:44:41 np0005545273 nova_compute[244644]: 2025-12-04 10:44:41.600 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:44:42 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:42 np0005545273 nova_compute[244644]: 2025-12-04 10:44:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:44:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c_9894221c-c337-4fa9-8995-71c106609676", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c_9894221c-c337-4fa9-8995-71c106609676, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c_9894221c-c337-4fa9-8995-71c106609676, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "snap_name": "342109e9-178b-44e5-bf68-2605580aac2c", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp'
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta.tmp' to config b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f/.meta'
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:342109e9-178b-44e5-bf68-2605580aac2c, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 157 KiB/s wr, 16 op/s
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "format": "json"}]: dispatch
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:44:44 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:44:44.029+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5260b088-bfa9-4f9a-adc0-a90d452dc12f' of type subvolume
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5260b088-bfa9-4f9a-adc0-a90d452dc12f' of type subvolume
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5260b088-bfa9-4f9a-adc0-a90d452dc12f", "force": true, "format": "json"}]: dispatch
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5260b088-bfa9-4f9a-adc0-a90d452dc12f'' moved to trashcan
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:44:44 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5260b088-bfa9-4f9a-adc0-a90d452dc12f, vol_name:cephfs) < ""
Dec  4 05:44:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:45 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:44:45.047 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:44:45 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:44:45.048 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:44:45 np0005545273 nova_compute[244644]: 2025-12-04 10:44:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:45 np0005545273 nova_compute[244644]: 2025-12-04 10:44:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 56 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 77 KiB/s wr, 11 op/s
Dec  4 05:44:45 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:44:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:44:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:45 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:44:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:44:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:45 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:45 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:45 np0005545273 podman[254070]: 2025-12-04 10:44:45.965342473 +0000 UTC m=+0.061689252 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:44:46 np0005545273 podman[254069]: 2025-12-04 10:44:46.087067853 +0000 UTC m=+0.183108915 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:44:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec  4 05:44:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec  4 05:44:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.363 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:44:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 174 KiB/s wr, 14 op/s
Dec  4 05:44:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:44:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602699821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:44:47 np0005545273 nova_compute[244644]: 2025-12-04 10:44:47.901 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.042 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.043 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.044 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.044 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:44:48 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:44:48.051 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.112 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.113 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.132 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:44:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:44:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660295175' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.661 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.667 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.686 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.688 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:44:48 np0005545273 nova_compute[244644]: 2025-12-04 10:44:48.689 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:44:49 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 57 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 101 KiB/s wr, 13 op/s
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:44:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:44:49 np0005545273 nova_compute[244644]: 2025-12-04 10:44:49.685 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:49 np0005545273 nova_compute[244644]: 2025-12-04 10:44:49.685 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:49 np0005545273 nova_compute[244644]: 2025-12-04 10:44:49.686 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:44:49 np0005545273 nova_compute[244644]: 2025-12-04 10:44:49.686 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:44:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 129 KiB/s wr, 11 op/s
Dec  4 05:44:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:44:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:53 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:44:53 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:44:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 129 KiB/s wr, 12 op/s
Dec  4 05:44:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:44:54.911 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  4 05:44:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:44:54.912 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  4 05:44:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:44:54.912 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  4 05:44:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 27 KiB/s wr, 5 op/s
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:44:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:44:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:44:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:44:56 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:44:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:44:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:44:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:44:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:44:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 6 op/s
Dec  4 05:44:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:44:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:44:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:44:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:44:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:44:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:44:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:44:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec  4 05:44:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec  4 05:44:59 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec  4 05:44:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 54 KiB/s wr, 5 op/s
Dec  4 05:45:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:45:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:00 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:00 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 57 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 42 KiB/s wr, 5 op/s
Dec  4 05:45:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 70 KiB/s wr, 6 op/s
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:04 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 70 KiB/s wr, 6 op/s
Dec  4 05:45:05 np0005545273 podman[254163]: 2025-12-04 10:45:05.94652737 +0000 UTC m=+0.055243492 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  4 05:45:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 58 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 47 KiB/s wr, 5 op/s
Dec  4 05:45:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:45:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:07 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/5793e533-143c-4fc0-b4e1-f51624f69c54'.
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/.meta.tmp'
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/.meta.tmp' to config b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd/.meta'
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "format": "json"}]: dispatch
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec  4 05:45:08 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.543583) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108543657, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2380, "num_deletes": 258, "total_data_size": 2855317, "memory_usage": 2908760, "flush_reason": "Manual Compaction"}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108560505, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2805124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21290, "largest_seqno": 23669, "table_properties": {"data_size": 2794481, "index_size": 6561, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25596, "raw_average_key_size": 21, "raw_value_size": 2771740, "raw_average_value_size": 2325, "num_data_blocks": 290, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764844965, "oldest_key_time": 1764844965, "file_creation_time": 1764845108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 16963 microseconds, and 6485 cpu microseconds.
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.560557) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2805124 bytes OK
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.560581) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562258) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562274) EVENT_LOG_v1 {"time_micros": 1764845108562270, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562291) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2844591, prev total WAL file size 2844591, number of live WAL files 2.
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.563069) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2739KB)], [50(7590KB)]
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108563126, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10577678, "oldest_snapshot_seqno": -1}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5273 keys, 8731800 bytes, temperature: kUnknown
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108615391, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8731800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8694764, "index_size": 22782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 130046, "raw_average_key_size": 24, "raw_value_size": 8598181, "raw_average_value_size": 1630, "num_data_blocks": 950, "num_entries": 5273, "num_filter_entries": 5273, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.615689) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8731800 bytes
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.617353) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.0 rd, 166.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 5803, records dropped: 530 output_compression: NoCompression
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.617370) EVENT_LOG_v1 {"time_micros": 1764845108617361, "job": 26, "event": "compaction_finished", "compaction_time_micros": 52364, "compaction_time_cpu_micros": 19718, "output_level": 6, "num_output_files": 1, "total_output_size": 8731800, "num_input_records": 5803, "num_output_records": 5273, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108617973, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845108619404, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.562996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:08 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:08.619445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 202 B/s rd, 85 KiB/s wr, 8 op/s
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:11 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/462392811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/462392811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:45:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 72 KiB/s wr, 6 op/s
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/85f48866-3ba8-4e88-a663-1bdf614917fb'.
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/.meta.tmp'
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/.meta.tmp' to config b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed/.meta'
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "format": "json"}]: dispatch
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec  4 05:45:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec  4 05:45:12 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:45:12 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:45:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 9 op/s
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:14 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:45:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:14 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:14 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 68 KiB/s wr, 6 op/s
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/6b3c2242-9930-48fe-b0aa-20deac217a1b'.
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/.meta.tmp'
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/.meta.tmp' to config b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5/.meta'
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "format": "json"}]: dispatch
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec  4 05:45:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec  4 05:45:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:45:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:45:16 np0005545273 podman[254185]: 2025-12-04 10:45:16.953151499 +0000 UTC m=+0.055620721 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  4 05:45:16 np0005545273 podman[254184]: 2025-12-04 10:45:16.989573643 +0000 UTC m=+0.092732572 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:45:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 58 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 69 KiB/s wr, 7 op/s
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:18 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:45:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:45:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 108 KiB/s wr, 10 op/s
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/72cd89ce-4efe-4b85-aea5-dc01ea42bb59'.
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/.meta.tmp'
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/.meta.tmp' to config b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1/.meta'
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "format": "json"}]: dispatch
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec  4 05:45:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec  4 05:45:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:45:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:45:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 6 op/s
Dec  4 05:45:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:45:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:21 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:45:22 np0005545273 ceph-osd[86021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8145 writes, 31K keys, 8145 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 8145 writes, 1973 syncs, 4.13 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2453 writes, 7270 keys, 2453 commit groups, 1.0 writes per commit group, ingest: 9.86 MB, 0.02 MB/s#012Interval WAL: 2453 writes, 1058 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  4 05:45:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 123 KiB/s wr, 10 op/s
Dec  4 05:45:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "format": "json"}]: dispatch
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:24.947+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7c4e2c1-3b68-4928-815d-84ba9442cbf1' of type subvolume
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7c4e2c1-3b68-4928-815d-84ba9442cbf1' of type subvolume
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7c4e2c1-3b68-4928-815d-84ba9442cbf1", "force": true, "format": "json"}]: dispatch
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f7c4e2c1-3b68-4928-815d-84ba9442cbf1'' moved to trashcan
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:45:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7c4e2c1-3b68-4928-815d-84ba9442cbf1, vol_name:cephfs) < ""
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:45:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:45:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:45:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:25 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 92 KiB/s wr, 7 op/s
Dec  4 05:45:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:45:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:45:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:45:26
Dec  4 05:45:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:45:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:45:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec  4 05:45:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:45:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 59 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 93 KiB/s wr, 8 op/s
Dec  4 05:45:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:45:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "format": "json"}]: dispatch
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:28 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:28.758+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '51dacf2e-4d8f-4133-9ae7-8b2784f31cc5' of type subvolume
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '51dacf2e-4d8f-4133-9ae7-8b2784f31cc5' of type subvolume
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "51dacf2e-4d8f-4133-9ae7-8b2784f31cc5", "force": true, "format": "json"}]: dispatch
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/51dacf2e-4d8f-4133-9ae7-8b2784f31cc5'' moved to trashcan
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:51dacf2e-4d8f-4133-9ae7-8b2784f31cc5, vol_name:cephfs) < ""
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:45:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:28 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:45:29 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.0 total, 600.0 interval#012Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2807 syncs, 3.71 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3259 writes, 9856 keys, 3259 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s#012Interval WAL: 3259 writes, 1412 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  4 05:45:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 126 KiB/s wr, 10 op/s
Dec  4 05:45:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 87 KiB/s wr, 8 op/s
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "format": "json"}]: dispatch
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:32 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:32.097+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4fd84f8-9ca9-412b-a602-9496343f58ed' of type subvolume
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f4fd84f8-9ca9-412b-a602-9496343f58ed' of type subvolume
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f4fd84f8-9ca9-412b-a602-9496343f58ed", "force": true, "format": "json"}]: dispatch
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f4fd84f8-9ca9-412b-a602-9496343f58ed'' moved to trashcan
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f4fd84f8-9ca9-412b-a602-9496343f58ed, vol_name:cephfs) < ""
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:45:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:45:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:45:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:32 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:45:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:45:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 146 KiB/s wr, 12 op/s
Dec  4 05:45:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 60 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 94 KiB/s wr, 7 op/s
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "format": "json"}]: dispatch
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ab4df956-bd5e-4998-a6ef-078628986afd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ab4df956-bd5e-4998-a6ef-078628986afd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:45:35 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:45:35.611+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab4df956-bd5e-4998-a6ef-078628986afd' of type subvolume
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab4df956-bd5e-4998-a6ef-078628986afd' of type subvolume
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab4df956-bd5e-4998-a6ef-078628986afd", "force": true, "format": "json"}]: dispatch
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ab4df956-bd5e-4998-a6ef-078628986afd'' moved to trashcan
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab4df956-bd5e-4998-a6ef-078628986afd, vol_name:cephfs) < ""
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:45:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:35 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:36 np0005545273 podman[254228]: 2025-12-04 10:45:36.978225532 +0000 UTC m=+0.078702601 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667238154743242 of space, bias 1.0, pg target 0.2001714464229726 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0003141884134020231 of space, bias 4.0, pg target 0.3770260960824277 quantized to 16 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:45:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 125 KiB/s wr, 10 op/s
Dec  4 05:45:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:45:37 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8989 writes, 34K keys, 8989 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 8989 writes, 2320 syncs, 3.87 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3286 writes, 10K keys, 3286 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s#012Interval WAL: 3286 writes, 1418 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:39 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 125 KiB/s wr, 11 op/s
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:45:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:45:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 9 op/s
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:45:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:45:42 np0005545273 podman[254463]: 2025-12-04 10:45:42.283732223 +0000 UTC m=+0.103097360 container create 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  4 05:45:42 np0005545273 podman[254463]: 2025-12-04 10:45:42.202293705 +0000 UTC m=+0.021658862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:45:42 np0005545273 systemd[1]: Started libpod-conmon-1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783.scope.
Dec  4 05:45:42 np0005545273 nova_compute[244644]: 2025-12-04 10:45:42.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:42 np0005545273 nova_compute[244644]: 2025-12-04 10:45:42.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:45:42 np0005545273 nova_compute[244644]: 2025-12-04 10:45:42.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:45:42 np0005545273 nova_compute[244644]: 2025-12-04 10:45:42.352 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:45:42 np0005545273 nova_compute[244644]: 2025-12-04 10:45:42.353 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:42 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:45:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:45:42 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:42 np0005545273 podman[254463]: 2025-12-04 10:45:42.97288472 +0000 UTC m=+0.792249887 container init 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:45:42 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:45:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:42 np0005545273 podman[254463]: 2025-12-04 10:45:42.980840076 +0000 UTC m=+0.800205213 container start 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:45:42 np0005545273 agitated_bose[254480]: 167 167
Dec  4 05:45:42 np0005545273 systemd[1]: libpod-1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783.scope: Deactivated successfully.
Dec  4 05:45:42 np0005545273 podman[254463]: 2025-12-04 10:45:42.987771176 +0000 UTC m=+0.807136343 container attach 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:45:42 np0005545273 podman[254463]: 2025-12-04 10:45:42.988285038 +0000 UTC m=+0.807650175 container died 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:45:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:43 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9c9d5db16fe0e9ee1ba840e6216aef8384abe22e63c5501774948a5e70ed2001-merged.mount: Deactivated successfully.
Dec  4 05:45:43 np0005545273 podman[254463]: 2025-12-04 10:45:43.078371599 +0000 UTC m=+0.897736726 container remove 1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:45:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:43 np0005545273 systemd[1]: libpod-conmon-1f530f9535bdc17c6e069091bf00907204a94505e70a236fedab34b0c4c32783.scope: Deactivated successfully.
Dec  4 05:45:43 np0005545273 podman[254502]: 2025-12-04 10:45:43.265722336 +0000 UTC m=+0.064344880 container create 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:45:43 np0005545273 systemd[1]: Started libpod-conmon-3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d.scope.
Dec  4 05:45:43 np0005545273 podman[254502]: 2025-12-04 10:45:43.225476638 +0000 UTC m=+0.024099202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:45:43 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:45:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:43 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:43 np0005545273 podman[254502]: 2025-12-04 10:45:43.396243468 +0000 UTC m=+0.194866062 container init 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:45:43 np0005545273 podman[254502]: 2025-12-04 10:45:43.404649124 +0000 UTC m=+0.203271678 container start 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  4 05:45:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 118 KiB/s wr, 10 op/s
Dec  4 05:45:43 np0005545273 podman[254502]: 2025-12-04 10:45:43.504259418 +0000 UTC m=+0.302882052 container attach 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:45:43 np0005545273 ceph-mgr[75651]: [devicehealth INFO root] Check health
Dec  4 05:45:43 np0005545273 boring_wilson[254518]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:45:43 np0005545273 boring_wilson[254518]: --> All data devices are unavailable
Dec  4 05:45:43 np0005545273 systemd[1]: libpod-3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d.scope: Deactivated successfully.
Dec  4 05:45:43 np0005545273 podman[254502]: 2025-12-04 10:45:43.884260191 +0000 UTC m=+0.682882735 container died 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:45:44 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c52cf728bd2e9a555adea9c77d0bb48af1c9dba19de23bcd92b0167ba7389332-merged.mount: Deactivated successfully.
Dec  4 05:45:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:44 np0005545273 podman[254502]: 2025-12-04 10:45:44.070794528 +0000 UTC m=+0.869417082 container remove 3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:45:44 np0005545273 systemd[1]: libpod-conmon-3c68e148a2dcc3e6a7bbe3892b1578b20abfcb9e98c7957202b426aa7247ae5d.scope: Deactivated successfully.
Dec  4 05:45:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:44 np0005545273 podman[254613]: 2025-12-04 10:45:44.555410668 +0000 UTC m=+0.040818193 container create eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:45:44 np0005545273 systemd[1]: Started libpod-conmon-eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895.scope.
Dec  4 05:45:44 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:45:44 np0005545273 podman[254613]: 2025-12-04 10:45:44.630765416 +0000 UTC m=+0.116172961 container init eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:45:44 np0005545273 podman[254613]: 2025-12-04 10:45:44.538153824 +0000 UTC m=+0.023561369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:45:44 np0005545273 podman[254613]: 2025-12-04 10:45:44.636871006 +0000 UTC m=+0.122278531 container start eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:45:44 np0005545273 podman[254613]: 2025-12-04 10:45:44.63989922 +0000 UTC m=+0.125306745 container attach eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:45:44 np0005545273 zen_easley[254629]: 167 167
Dec  4 05:45:44 np0005545273 systemd[1]: libpod-eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895.scope: Deactivated successfully.
Dec  4 05:45:44 np0005545273 podman[254613]: 2025-12-04 10:45:44.643800826 +0000 UTC m=+0.129208371 container died eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  4 05:45:44 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4dcb050ebfb1117d7aedcae894343d170ef9fd713ddd806043363e70ed34d844-merged.mount: Deactivated successfully.
Dec  4 05:45:44 np0005545273 podman[254613]: 2025-12-04 10:45:44.684219358 +0000 UTC m=+0.169626883 container remove eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_easley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:45:44 np0005545273 systemd[1]: libpod-conmon-eb421e4f0a35deaf6cb307d936de13321e9b4eab5904c741c26c85f05bef9895.scope: Deactivated successfully.
Dec  4 05:45:44 np0005545273 podman[254653]: 2025-12-04 10:45:44.818682357 +0000 UTC m=+0.024560284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:45:45 np0005545273 podman[254653]: 2025-12-04 10:45:45.051803205 +0000 UTC m=+0.257681122 container create aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:45:45 np0005545273 systemd[1]: Started libpod-conmon-aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234.scope.
Dec  4 05:45:45 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:45:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:45 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:45 np0005545273 podman[254653]: 2025-12-04 10:45:45.140723118 +0000 UTC m=+0.346601045 container init aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:45:45 np0005545273 podman[254653]: 2025-12-04 10:45:45.150383175 +0000 UTC m=+0.356261082 container start aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:45:45 np0005545273 podman[254653]: 2025-12-04 10:45:45.155559871 +0000 UTC m=+0.361437808 container attach aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:45:45 np0005545273 nova_compute[244644]: 2025-12-04 10:45:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]: {
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:    "0": [
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:        {
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "devices": [
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "/dev/loop3"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            ],
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_name": "ceph_lv0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_size": "21470642176",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "name": "ceph_lv0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "tags": {
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cluster_name": "ceph",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.crush_device_class": "",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.encrypted": "0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.objectstore": "bluestore",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osd_id": "0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.type": "block",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.vdo": "0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.with_tpm": "0"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            },
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "type": "block",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "vg_name": "ceph_vg0"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:        }
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:    ],
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:    "1": [
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:        {
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "devices": [
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "/dev/loop4"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            ],
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_name": "ceph_lv1",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_size": "21470642176",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "name": "ceph_lv1",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "tags": {
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cluster_name": "ceph",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.crush_device_class": "",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.encrypted": "0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.objectstore": "bluestore",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osd_id": "1",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.type": "block",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.vdo": "0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.with_tpm": "0"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            },
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "type": "block",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "vg_name": "ceph_vg1"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:        }
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:    ],
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:    "2": [
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:        {
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "devices": [
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "/dev/loop5"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            ],
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_name": "ceph_lv2",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_size": "21470642176",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "name": "ceph_lv2",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "tags": {
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.cluster_name": "ceph",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.crush_device_class": "",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.encrypted": "0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.objectstore": "bluestore",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osd_id": "2",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.type": "block",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.vdo": "0",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:                "ceph.with_tpm": "0"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            },
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "type": "block",
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:            "vg_name": "ceph_vg2"
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:        }
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]:    ]
Dec  4 05:45:45 np0005545273 unruffled_khorana[254670]: }
Dec  4 05:45:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 61 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 58 KiB/s wr, 6 op/s
Dec  4 05:45:45 np0005545273 systemd[1]: libpod-aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234.scope: Deactivated successfully.
Dec  4 05:45:45 np0005545273 podman[254653]: 2025-12-04 10:45:45.509427473 +0000 UTC m=+0.715305380 container died aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:45:45 np0005545273 systemd[1]: var-lib-containers-storage-overlay-113fae03c4494d473e82b2c6e31cfd9e40af242d30b4063b0682e0068677a70f-merged.mount: Deactivated successfully.
Dec  4 05:45:45 np0005545273 podman[254653]: 2025-12-04 10:45:45.558111148 +0000 UTC m=+0.763989045 container remove aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:45:45 np0005545273 systemd[1]: libpod-conmon-aa5053575068bc9dac4cb4441474796e6746f56b59fb0b335c2eac5ccb13f234.scope: Deactivated successfully.
Dec  4 05:45:46 np0005545273 podman[254752]: 2025-12-04 10:45:46.002320006 +0000 UTC m=+0.045723583 container create 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:45:46 np0005545273 systemd[1]: Started libpod-conmon-216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c.scope.
Dec  4 05:45:46 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:45:46 np0005545273 podman[254752]: 2025-12-04 10:45:46.078911645 +0000 UTC m=+0.122315152 container init 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:45:46 np0005545273 podman[254752]: 2025-12-04 10:45:45.983419673 +0000 UTC m=+0.026823200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:45:46 np0005545273 podman[254752]: 2025-12-04 10:45:46.084903362 +0000 UTC m=+0.128306869 container start 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:45:46 np0005545273 podman[254752]: 2025-12-04 10:45:46.088416979 +0000 UTC m=+0.131820486 container attach 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:45:46 np0005545273 friendly_wilson[254768]: 167 167
Dec  4 05:45:46 np0005545273 systemd[1]: libpod-216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c.scope: Deactivated successfully.
Dec  4 05:45:46 np0005545273 podman[254752]: 2025-12-04 10:45:46.091453283 +0000 UTC m=+0.134856790 container died 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:45:46 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ef4e9ebdd23098cbd9be01f32ba2121d214c8d8913c6c72830245e8f1f788b9e-merged.mount: Deactivated successfully.
Dec  4 05:45:46 np0005545273 podman[254752]: 2025-12-04 10:45:46.138301132 +0000 UTC m=+0.181704629 container remove 216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:45:46 np0005545273 systemd[1]: libpod-conmon-216cc9226620d1b90774e8656a16da3ccd1a99ffa18bb2e4086c784851b9179c.scope: Deactivated successfully.
Dec  4 05:45:46 np0005545273 podman[254792]: 2025-12-04 10:45:46.294539606 +0000 UTC m=+0.041064278 container create d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  4 05:45:46 np0005545273 systemd[1]: Started libpod-conmon-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope.
Dec  4 05:45:46 np0005545273 nova_compute[244644]: 2025-12-04 10:45:46.336 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:46 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:45:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:46 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:45:46 np0005545273 podman[254792]: 2025-12-04 10:45:46.275924129 +0000 UTC m=+0.022448831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:45:46 np0005545273 podman[254792]: 2025-12-04 10:45:46.384062332 +0000 UTC m=+0.130587034 container init d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:45:46 np0005545273 podman[254792]: 2025-12-04 10:45:46.392753825 +0000 UTC m=+0.139278507 container start d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:45:46 np0005545273 podman[254792]: 2025-12-04 10:45:46.396546528 +0000 UTC m=+0.143071240 container attach d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:45:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:46 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:47 np0005545273 lvm[254910]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:45:47 np0005545273 lvm[254909]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:45:47 np0005545273 lvm[254909]: VG ceph_vg1 finished
Dec  4 05:45:47 np0005545273 lvm[254910]: VG ceph_vg2 finished
Dec  4 05:45:47 np0005545273 lvm[254907]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:45:47 np0005545273 lvm[254907]: VG ceph_vg0 finished
Dec  4 05:45:47 np0005545273 podman[254886]: 2025-12-04 10:45:47.144577981 +0000 UTC m=+0.055906633 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  4 05:45:47 np0005545273 podman[254884]: 2025-12-04 10:45:47.184147731 +0000 UTC m=+0.097974715 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  4 05:45:47 np0005545273 flamboyant_turing[254808]: {}
Dec  4 05:45:47 np0005545273 systemd[1]: libpod-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope: Deactivated successfully.
Dec  4 05:45:47 np0005545273 systemd[1]: libpod-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope: Consumed 1.422s CPU time.
Dec  4 05:45:47 np0005545273 podman[254792]: 2025-12-04 10:45:47.227687899 +0000 UTC m=+0.974212581 container died d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:45:47 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d351dc25aa186075c097d8e145845498864796161886bae72bbf92340cb96e7a-merged.mount: Deactivated successfully.
Dec  4 05:45:47 np0005545273 podman[254792]: 2025-12-04 10:45:47.278384004 +0000 UTC m=+1.024908686 container remove d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:45:47 np0005545273 systemd[1]: libpod-conmon-d72929f73398881bcf505ca5570c88eac5feaf160119c797a06cfb92a3f7df4c.scope: Deactivated successfully.
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.362 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:45:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 94 KiB/s wr, 10 op/s
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:45:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573814568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:45:47 np0005545273 nova_compute[244644]: 2025-12-04 10:45:47.915 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.089 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.090 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5032MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.090 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.091 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.158 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.159 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.176 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:45:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:48 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:45:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:45:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275460644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.699 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.705 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.721 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.723 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:45:48 np0005545273 nova_compute[244644]: 2025-12-04 10:45:48.723 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:45:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 63 KiB/s wr, 7 op/s
Dec  4 05:45:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:45:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:50 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:50 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:45:50.364 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:45:50 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:45:50.366 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:50 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:50 np0005545273 nova_compute[244644]: 2025-12-04 10:45:50.719 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:50 np0005545273 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:50 np0005545273 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:50 np0005545273 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:45:50 np0005545273 nova_compute[244644]: 2025-12-04 10:45:50.720 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:45:51 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:45:51.368 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:45:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 62 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 62 KiB/s wr, 6 op/s
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 8 op/s
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:45:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:45:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:45:53 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:45:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.361782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154361831, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 866, "num_deletes": 260, "total_data_size": 892661, "memory_usage": 909800, "flush_reason": "Manual Compaction"}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154369812, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 871470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23670, "largest_seqno": 24535, "table_properties": {"data_size": 867155, "index_size": 1903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10179, "raw_average_key_size": 19, "raw_value_size": 858074, "raw_average_value_size": 1619, "num_data_blocks": 85, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845109, "oldest_key_time": 1764845109, "file_creation_time": 1764845154, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8129 microseconds, and 3639 cpu microseconds.
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.369906) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 871470 bytes OK
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.369940) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371403) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371420) EVENT_LOG_v1 {"time_micros": 1764845154371414, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371449) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 888115, prev total WAL file size 888115, number of live WAL files 2.
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371995) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(851KB)], [53(8527KB)]
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154372066, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9603270, "oldest_snapshot_seqno": -1}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5268 keys, 9503917 bytes, temperature: kUnknown
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154432636, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9503917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9465394, "index_size": 24269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 131392, "raw_average_key_size": 24, "raw_value_size": 9367489, "raw_average_value_size": 1778, "num_data_blocks": 1012, "num_entries": 5268, "num_filter_entries": 5268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845154, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.432890) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9503917 bytes
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.434440) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.3 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(21.9) write-amplify(10.9) OK, records in: 5803, records dropped: 535 output_compression: NoCompression
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.434457) EVENT_LOG_v1 {"time_micros": 1764845154434448, "job": 28, "event": "compaction_finished", "compaction_time_micros": 60647, "compaction_time_cpu_micros": 22746, "output_level": 6, "num_output_files": 1, "total_output_size": 9503917, "num_input_records": 5803, "num_output_records": 5268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154434688, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845154436054, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.371940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:45:54.436131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:45:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:45:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:45:54.912 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:45:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:45:54.913 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:45:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:45:54.913 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:45:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 5 op/s
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/709503a4-ece9-4e76-b07e-7f97746dfdf4'.
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "format": "json"}]: dispatch
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:45:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:45:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:45:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:45:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:45:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:57 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:45:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 9 op/s
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:45:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:45:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:45:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:45:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:45:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:45:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:45:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:45:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:45:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 5 op/s
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "format": "json"}]: dispatch
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:46:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:46:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:00 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 5 op/s
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 8 op/s
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "target_sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, target_sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/6434388b-13b0-44fd-9f14-bc4785113c76'.
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 4fdc95a4-c293-4166-b342-259be81a8d49 for path b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043'
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, target_sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.580+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, cbd234cb-faf5-4e19-a1b6-ca47791b1043)
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:03.595+0000 7f8428c9f640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, cbd234cb-faf5-4e19-a1b6-ca47791b1043) -- by 0 seconds
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec  4 05:46:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:04 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:04.581+0000 7f83fb176640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:46:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:46:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:04 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.snap/eb21836e-156d-4fd6-adb6-75fc9fe014e2/709503a4-ece9-4e76-b07e-7f97746dfdf4' to b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/6434388b-13b0-44fd-9f14-bc4785113c76'
Dec  4 05:46:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] untracking 4fdc95a4-c293-4166-b342-259be81a8d49
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp'
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta.tmp' to config b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043/.meta'
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, cbd234cb-faf5-4e19-a1b6-ca47791b1043)
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] Exception VolumeException was raised. Apparently an entry from the metadata file of clone source was removed because one of the clone job(s) has completed/cancelled. Therefore ignoring and proceeding Printing the exception: -22 (error fetching subvolume metadata)
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-ongoing-clones does not exist
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Dec  4 05:46:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f8435ce5760>
Dec  4 05:46:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 62 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 65 KiB/s wr, 6 op/s
Dec  4 05:46:05 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.iwufnj(active, since 31m)
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 116 KiB/s wr, 12 op/s
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:46:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:46:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:07 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:07 np0005545273 podman[255057]: 2025-12-04 10:46:07.953916487 +0000 UTC m=+0.058885096 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec  4 05:46:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 90 KiB/s wr, 9 op/s
Dec  4 05:46:11 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:46:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:11 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 63 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 90 KiB/s wr, 9 op/s
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471404368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471404368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:46:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 64 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 132 KiB/s wr, 13 op/s
Dec  4 05:46:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:46:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:46:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:46:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:14 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:15 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:15 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:46:15 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:46:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 64 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 93 KiB/s wr, 10 op/s
Dec  4 05:46:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 111 KiB/s wr, 12 op/s
Dec  4 05:46:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:46:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:46:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:17 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:17 np0005545273 podman[255080]: 2025-12-04 10:46:17.951264588 +0000 UTC m=+0.050068119 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:46:17 np0005545273 podman[255079]: 2025-12-04 10:46:17.9810881 +0000 UTC m=+0.082546877 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  4 05:46:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 6 op/s
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:46:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:46:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:46:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:21 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 6 op/s
Dec  4 05:46:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:46:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:46:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec  4 05:46:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec  4 05:46:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec  4 05:46:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec  4 05:46:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:46:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:46:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 97 KiB/s wr, 9 op/s
Dec  4 05:46:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:46:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:46:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:24 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/a2cad645-958b-479a-9a2c-83321704920d'.
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/.meta.tmp'
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/.meta.tmp' to config b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f/.meta'
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "format": "json"}]: dispatch
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:46:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:46:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 55 KiB/s wr, 5 op/s
Dec  4 05:46:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:46:26
Dec  4 05:46:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:46:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:46:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', 'default.rgw.log', 'vms', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Dec  4 05:46:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:46:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 84 KiB/s wr, 8 op/s
Dec  4 05:46:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:46:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:28 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:46:28 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "new_size": 2147483648, "format": "json"}]: dispatch
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s wr, 6 op/s
Dec  4 05:46:31 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:46:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:46:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:31 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 64 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s wr, 6 op/s
Dec  4 05:46:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:31 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "format": "json"}]: dispatch
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fdc591ae-48a2-4089-a539-01382bacd19f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fdc591ae-48a2-4089-a539-01382bacd19f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:32 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:32.430+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc591ae-48a2-4089-a539-01382bacd19f' of type subvolume
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc591ae-48a2-4089-a539-01382bacd19f' of type subvolume
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc591ae-48a2-4089-a539-01382bacd19f", "force": true, "format": "json"}]: dispatch
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fdc591ae-48a2-4089-a539-01382bacd19f'' moved to trashcan
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:46:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc591ae-48a2-4089-a539-01382bacd19f, vol_name:cephfs) < ""
Dec  4 05:46:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 116 KiB/s wr, 10 op/s
Dec  4 05:46:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:46:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:46:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:46:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:46:35 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:46:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:46:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:35 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 80 KiB/s wr, 7 op/s
Dec  4 05:46:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:46:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:46:36 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667238154743242 of space, bias 1.0, pg target 0.2001714464229726 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00038834688462743443 of space, bias 4.0, pg target 0.4660162615529213 quantized to 16 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:46:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 107 KiB/s wr, 10 op/s
Dec  4 05:46:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:46:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:46:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:38 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:38 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:38 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:38 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:38 np0005545273 podman[255131]: 2025-12-04 10:46:38.962300912 +0000 UTC m=+0.063873829 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd)
Dec  4 05:46:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 8 op/s
Dec  4 05:46:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:39 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/ef9ff5da-9de6-46b4-9a76-f26d18a22519'.
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/.meta.tmp'
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/.meta.tmp' to config b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49/.meta'
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "format": "json"}]: dispatch
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:46:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 65 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 77 KiB/s wr, 8 op/s
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:46:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:46:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:41 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:41 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:41 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:42 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:43 np0005545273 nova_compute[244644]: 2025-12-04 10:46:43.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:43 np0005545273 nova_compute[244644]: 2025-12-04 10:46:43.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:46:43 np0005545273 nova_compute[244644]: 2025-12-04 10:46:43.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:46:43 np0005545273 nova_compute[244644]: 2025-12-04 10:46:43.358 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:46:43 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec  4 05:46:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 132 KiB/s wr, 13 op/s
Dec  4 05:46:44 np0005545273 nova_compute[244644]: 2025-12-04 10:46:44.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 8 op/s
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:46 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:46 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "format": "json"}]: dispatch
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:46 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:46.943+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3ac60cc-8acd-4ed9-b323-017b1c573a49' of type subvolume
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b3ac60cc-8acd-4ed9-b323-017b1c573a49' of type subvolume
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b3ac60cc-8acd-4ed9-b323-017b1c573a49", "force": true, "format": "json"}]: dispatch
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b3ac60cc-8acd-4ed9-b323-017b1c573a49'' moved to trashcan
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:46:46 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b3ac60cc-8acd-4ed9-b323-017b1c573a49, vol_name:cephfs) < ""
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.362 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:46:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 120 KiB/s wr, 11 op/s
Dec  4 05:46:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:46:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019975673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:46:47 np0005545273 nova_compute[244644]: 2025-12-04 10:46:47.942 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.105 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.107 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5031MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.107 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.107 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.189 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.190 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.221 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:46:48 np0005545273 podman[255280]: 2025-12-04 10:46:48.266688121 +0000 UTC m=+0.061496840 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec  4 05:46:48 np0005545273 podman[255279]: 2025-12-04 10:46:48.322566442 +0000 UTC m=+0.119756969 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:46:48 np0005545273 podman[255380]: 2025-12-04 10:46:48.569048579 +0000 UTC m=+0.041174141 container create 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:46:48 np0005545273 systemd[1]: Started libpod-conmon-0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658.scope.
Dec  4 05:46:48 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:48 np0005545273 podman[255380]: 2025-12-04 10:46:48.550578456 +0000 UTC m=+0.022704038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:48 np0005545273 podman[255380]: 2025-12-04 10:46:48.661845906 +0000 UTC m=+0.133971488 container init 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:46:48 np0005545273 podman[255380]: 2025-12-04 10:46:48.670361635 +0000 UTC m=+0.142487197 container start 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  4 05:46:48 np0005545273 podman[255380]: 2025-12-04 10:46:48.67424755 +0000 UTC m=+0.146373142 container attach 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:46:48 np0005545273 focused_hawking[255396]: 167 167
Dec  4 05:46:48 np0005545273 systemd[1]: libpod-0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658.scope: Deactivated successfully.
Dec  4 05:46:48 np0005545273 podman[255380]: 2025-12-04 10:46:48.677128241 +0000 UTC m=+0.149253813 container died 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:46:48 np0005545273 systemd[1]: var-lib-containers-storage-overlay-161597c1a231aa88bd3e1ed95e2a09b45dccd4b0b222fbe6d10940d894bc8122-merged.mount: Deactivated successfully.
Dec  4 05:46:48 np0005545273 podman[255380]: 2025-12-04 10:46:48.720783963 +0000 UTC m=+0.192909525 container remove 0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hawking, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:46:48 np0005545273 systemd[1]: libpod-conmon-0813016268b36d51246ed94b471d64dbaa309edb289c85e3ea2c0de048c3e658.scope: Deactivated successfully.
Dec  4 05:46:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:46:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/167740594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.783 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.789 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.812 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.814 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:46:48 np0005545273 nova_compute[244644]: 2025-12-04 10:46:48.814 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:46:48 np0005545273 podman[255422]: 2025-12-04 10:46:48.875761824 +0000 UTC m=+0.040078784 container create a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:46:48 np0005545273 systemd[1]: Started libpod-conmon-a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3.scope.
Dec  4 05:46:48 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:48 np0005545273 podman[255422]: 2025-12-04 10:46:48.860468859 +0000 UTC m=+0.024785839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:48 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:48 np0005545273 podman[255422]: 2025-12-04 10:46:48.970632592 +0000 UTC m=+0.134949572 container init a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:46:48 np0005545273 podman[255422]: 2025-12-04 10:46:48.976984008 +0000 UTC m=+0.141300958 container start a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  4 05:46:48 np0005545273 podman[255422]: 2025-12-04 10:46:48.980816622 +0000 UTC m=+0.145133612 container attach a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice", "format": "json"}]: dispatch
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:49 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]: [
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:    {
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "available": false,
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "being_replaced": false,
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "ceph_device_lvm": false,
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "lsm_data": {},
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "lvs": [],
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "path": "/dev/sr0",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "rejected_reasons": [
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "Has a FileSystem",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "Insufficient space (<5GB)"
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        ],
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        "sys_api": {
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "actuators": null,
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "device_nodes": [
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:                "sr0"
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            ],
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "devname": "sr0",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "human_readable_size": "482.00 KB",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "id_bus": "ata",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "model": "QEMU DVD-ROM",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "nr_requests": "2",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "parent": "/dev/sr0",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "partitions": {},
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "path": "/dev/sr0",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "removable": "1",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "rev": "2.5+",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "ro": "0",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "rotational": "1",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "sas_address": "",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "sas_device_handle": "",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "scheduler_mode": "mq-deadline",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "sectors": 0,
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "sectorsize": "2048",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "size": 493568.0,
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "support_discard": "2048",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "type": "disk",
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:            "vendor": "QEMU"
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:        }
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]:    }
Dec  4 05:46:49 np0005545273 pensive_meninsky[255438]: ]
Dec  4 05:46:49 np0005545273 systemd[1]: libpod-a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3.scope: Deactivated successfully.
Dec  4 05:46:49 np0005545273 podman[255422]: 2025-12-04 10:46:49.482746697 +0000 UTC m=+0.647063657 container died a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:46:49 np0005545273 systemd[1]: var-lib-containers-storage-overlay-48f53c65df92d0aeafc98fa1090e0012341a49b01e5acd2ceafc33a234d657b7-merged.mount: Deactivated successfully.
Dec  4 05:46:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 93 KiB/s wr, 7 op/s
Dec  4 05:46:49 np0005545273 podman[255422]: 2025-12-04 10:46:49.527327071 +0000 UTC m=+0.691644041 container remove a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:46:49 np0005545273 systemd[1]: libpod-conmon-a6e7b12504b1446c45eab58358e8ae83284937719d028a1f2989b48ce73411c3.scope: Deactivated successfully.
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:49 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:46:50 np0005545273 podman[256248]: 2025-12-04 10:46:50.013917908 +0000 UTC m=+0.041347934 container create fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  4 05:46:50 np0005545273 systemd[1]: Started libpod-conmon-fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433.scope.
Dec  4 05:46:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:50 np0005545273 podman[256248]: 2025-12-04 10:46:49.997035084 +0000 UTC m=+0.024465150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:50 np0005545273 podman[256248]: 2025-12-04 10:46:50.10812922 +0000 UTC m=+0.135559306 container init fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:46:50 np0005545273 podman[256248]: 2025-12-04 10:46:50.116858205 +0000 UTC m=+0.144288271 container start fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  4 05:46:50 np0005545273 podman[256248]: 2025-12-04 10:46:50.120910814 +0000 UTC m=+0.148340880 container attach fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:46:50 np0005545273 recursing_ishizaka[256265]: 167 167
Dec  4 05:46:50 np0005545273 systemd[1]: libpod-fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433.scope: Deactivated successfully.
Dec  4 05:46:50 np0005545273 podman[256248]: 2025-12-04 10:46:50.12319915 +0000 UTC m=+0.150629186 container died fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:46:50 np0005545273 systemd[1]: var-lib-containers-storage-overlay-56cf2fb29970a64f3a11b95fe7b94b8f929b284cc2fd10e0aa9d10c5a9f7eac4-merged.mount: Deactivated successfully.
Dec  4 05:46:50 np0005545273 podman[256248]: 2025-12-04 10:46:50.16883339 +0000 UTC m=+0.196263426 container remove fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ishizaka, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  4 05:46:50 np0005545273 systemd[1]: libpod-conmon-fc3aa1b0bd35e986d3355347b9595ca7fb528ab4a0858aaab5cdda2dc6462433.scope: Deactivated successfully.
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/cf716c94-9722-4ff4-9497-36c129aaac2e'.
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/.meta.tmp'
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/.meta.tmp' to config b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf/.meta'
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "format": "json"}]: dispatch
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec  4 05:46:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec  4 05:46:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:46:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:46:50 np0005545273 podman[256289]: 2025-12-04 10:46:50.358504503 +0000 UTC m=+0.056130788 container create ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:46:50 np0005545273 systemd[1]: Started libpod-conmon-ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1.scope.
Dec  4 05:46:50 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:50 np0005545273 podman[256289]: 2025-12-04 10:46:50.339740623 +0000 UTC m=+0.037366928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:50 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:50 np0005545273 podman[256289]: 2025-12-04 10:46:50.456711882 +0000 UTC m=+0.154338177 container init ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:46:50 np0005545273 podman[256289]: 2025-12-04 10:46:50.464396411 +0000 UTC m=+0.162022696 container start ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:46:50 np0005545273 podman[256289]: 2025-12-04 10:46:50.468376549 +0000 UTC m=+0.166002864 container attach ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:46:50 np0005545273 nova_compute[244644]: 2025-12-04 10:46:50.814 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:50 np0005545273 nova_compute[244644]: 2025-12-04 10:46:50.815 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:50 np0005545273 nova_compute[244644]: 2025-12-04 10:46:50.815 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:50 np0005545273 clever_elgamal[256305]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:46:50 np0005545273 clever_elgamal[256305]: --> All data devices are unavailable
Dec  4 05:46:50 np0005545273 systemd[1]: libpod-ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1.scope: Deactivated successfully.
Dec  4 05:46:50 np0005545273 podman[256289]: 2025-12-04 10:46:50.996597488 +0000 UTC m=+0.694223773 container died ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:46:51 np0005545273 systemd[1]: var-lib-containers-storage-overlay-dd35df53726fe6e0bf2879ff76b70556ffd97d51518ff4f6992ed56c85d38060-merged.mount: Deactivated successfully.
Dec  4 05:46:51 np0005545273 podman[256289]: 2025-12-04 10:46:51.052138631 +0000 UTC m=+0.749764916 container remove ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:46:51 np0005545273 systemd[1]: libpod-conmon-ed4553c76bee8e0638cf154353a50370e8416e53bed6ea39b4b854fb1ff7f6c1.scope: Deactivated successfully.
Dec  4 05:46:51 np0005545273 nova_compute[244644]: 2025-12-04 10:46:51.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:51 np0005545273 nova_compute[244644]: 2025-12-04 10:46:51.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:46:51 np0005545273 nova_compute[244644]: 2025-12-04 10:46:51.337 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:46:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 66 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 94 KiB/s wr, 8 op/s
Dec  4 05:46:51 np0005545273 podman[256401]: 2025-12-04 10:46:51.558994587 +0000 UTC m=+0.047045696 container create 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:46:51 np0005545273 systemd[1]: Started libpod-conmon-1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28.scope.
Dec  4 05:46:51 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:51 np0005545273 podman[256401]: 2025-12-04 10:46:51.53347497 +0000 UTC m=+0.021526059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:51 np0005545273 podman[256401]: 2025-12-04 10:46:51.645249532 +0000 UTC m=+0.133300621 container init 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:46:51 np0005545273 podman[256401]: 2025-12-04 10:46:51.654206533 +0000 UTC m=+0.142257602 container start 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  4 05:46:51 np0005545273 hungry_cerf[256417]: 167 167
Dec  4 05:46:51 np0005545273 podman[256401]: 2025-12-04 10:46:51.658127358 +0000 UTC m=+0.146178427 container attach 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:46:51 np0005545273 systemd[1]: libpod-1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28.scope: Deactivated successfully.
Dec  4 05:46:51 np0005545273 podman[256401]: 2025-12-04 10:46:51.661900931 +0000 UTC m=+0.149952020 container died 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:46:51 np0005545273 systemd[1]: var-lib-containers-storage-overlay-56ec130938897b4da6ebf26143b5ee86357d3f63f1d0e17b4a100521e7655d2b-merged.mount: Deactivated successfully.
Dec  4 05:46:51 np0005545273 podman[256401]: 2025-12-04 10:46:51.704919657 +0000 UTC m=+0.192970726 container remove 1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_cerf, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:46:51 np0005545273 systemd[1]: libpod-conmon-1a2cf44ee9a9da5c3c41c8e1f080576d6bc77407c2670b878f9461fc6157cc28.scope: Deactivated successfully.
Dec  4 05:46:51 np0005545273 podman[256440]: 2025-12-04 10:46:51.872354474 +0000 UTC m=+0.030642472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:52 np0005545273 podman[256440]: 2025-12-04 10:46:52.136458864 +0000 UTC m=+0.294746842 container create a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:46:52 np0005545273 systemd[1]: Started libpod-conmon-a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa.scope.
Dec  4 05:46:52 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:52 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:52 np0005545273 podman[256440]: 2025-12-04 10:46:52.282607869 +0000 UTC m=+0.440895847 container init a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:46:52 np0005545273 podman[256440]: 2025-12-04 10:46:52.291764404 +0000 UTC m=+0.450052372 container start a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:46:52 np0005545273 podman[256440]: 2025-12-04 10:46:52.298834828 +0000 UTC m=+0.457122856 container attach a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]: {
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:    "0": [
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:        {
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "devices": [
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "/dev/loop3"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            ],
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_name": "ceph_lv0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_size": "21470642176",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "name": "ceph_lv0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "tags": {
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cluster_name": "ceph",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.crush_device_class": "",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.encrypted": "0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.objectstore": "bluestore",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osd_id": "0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.type": "block",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.vdo": "0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.with_tpm": "0"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            },
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "type": "block",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "vg_name": "ceph_vg0"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:        }
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:    ],
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:    "1": [
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:        {
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "devices": [
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "/dev/loop4"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            ],
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_name": "ceph_lv1",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_size": "21470642176",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "name": "ceph_lv1",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "tags": {
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cluster_name": "ceph",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.crush_device_class": "",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.encrypted": "0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.objectstore": "bluestore",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osd_id": "1",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.type": "block",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.vdo": "0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.with_tpm": "0"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            },
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "type": "block",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "vg_name": "ceph_vg1"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:        }
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:    ],
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:    "2": [
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:        {
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "devices": [
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "/dev/loop5"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            ],
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_name": "ceph_lv2",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_size": "21470642176",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "name": "ceph_lv2",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "tags": {
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.cluster_name": "ceph",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.crush_device_class": "",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.encrypted": "0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.objectstore": "bluestore",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osd_id": "2",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.type": "block",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.vdo": "0",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:                "ceph.with_tpm": "0"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            },
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "type": "block",
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:            "vg_name": "ceph_vg2"
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:        }
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]:    ]
Dec  4 05:46:52 np0005545273 wonderful_payne[256457]: }
Dec  4 05:46:52 np0005545273 systemd[1]: libpod-a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa.scope: Deactivated successfully.
Dec  4 05:46:52 np0005545273 podman[256440]: 2025-12-04 10:46:52.653339125 +0000 UTC m=+0.811627093 container died a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  4 05:46:52 np0005545273 systemd[1]: var-lib-containers-storage-overlay-94a4606c621cb6ff0b885afee600630370fc89d160baa54b3a646f07f34c9f22-merged.mount: Deactivated successfully.
Dec  4 05:46:52 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:46:52 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:46:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:53 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:46:53 np0005545273 podman[256440]: 2025-12-04 10:46:53.105767005 +0000 UTC m=+1.264054973 container remove a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  4 05:46:53 np0005545273 systemd[1]: libpod-conmon-a6038420de14b77798aea175e8f8baec974ab6872b1619e055189830264a61aa.scope: Deactivated successfully.
Dec  4 05:46:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:46:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:46:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 121 KiB/s wr, 11 op/s
Dec  4 05:46:53 np0005545273 podman[256541]: 2025-12-04 10:46:53.632784036 +0000 UTC m=+0.045041566 container create a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:46:53 np0005545273 systemd[1]: Started libpod-conmon-a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac.scope.
Dec  4 05:46:53 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:53 np0005545273 podman[256541]: 2025-12-04 10:46:53.613570004 +0000 UTC m=+0.025827524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:53 np0005545273 podman[256541]: 2025-12-04 10:46:53.714244134 +0000 UTC m=+0.126501614 container init a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  4 05:46:53 np0005545273 podman[256541]: 2025-12-04 10:46:53.722773764 +0000 UTC m=+0.135031254 container start a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:46:53 np0005545273 podman[256541]: 2025-12-04 10:46:53.726363452 +0000 UTC m=+0.138620952 container attach a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:46:53 np0005545273 relaxed_proskuriakova[256557]: 167 167
Dec  4 05:46:53 np0005545273 systemd[1]: libpod-a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac.scope: Deactivated successfully.
Dec  4 05:46:53 np0005545273 podman[256541]: 2025-12-04 10:46:53.72996115 +0000 UTC m=+0.142218660 container died a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:46:53 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d94a08611d597ad99cac06c455de2d3f773de8ed7ae84a442e65d2640b9c8c05-merged.mount: Deactivated successfully.
Dec  4 05:46:53 np0005545273 podman[256541]: 2025-12-04 10:46:53.777632839 +0000 UTC m=+0.189890349 container remove a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:46:53 np0005545273 systemd[1]: libpod-conmon-a1f180f4ff6b89d8df32175be465735c1302379650598ac72c64febe6c7327ac.scope: Deactivated successfully.
Dec  4 05:46:53 np0005545273 podman[256581]: 2025-12-04 10:46:53.955740419 +0000 UTC m=+0.048936252 container create 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:46:53 np0005545273 systemd[1]: Started libpod-conmon-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope.
Dec  4 05:46:54 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:46:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:54 np0005545273 podman[256581]: 2025-12-04 10:46:53.933936105 +0000 UTC m=+0.027131988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:46:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:54 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:46:54 np0005545273 podman[256581]: 2025-12-04 10:46:54.0446084 +0000 UTC m=+0.137804273 container init 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:46:54 np0005545273 podman[256581]: 2025-12-04 10:46:54.051743565 +0000 UTC m=+0.144939398 container start 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  4 05:46:54 np0005545273 podman[256581]: 2025-12-04 10:46:54.055906207 +0000 UTC m=+0.149102070 container attach 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  4 05:46:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:46:54 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:46:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:54 np0005545273 lvm[256676]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:46:54 np0005545273 lvm[256677]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:46:54 np0005545273 lvm[256677]: VG ceph_vg1 finished
Dec  4 05:46:54 np0005545273 lvm[256676]: VG ceph_vg0 finished
Dec  4 05:46:54 np0005545273 lvm[256679]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:46:54 np0005545273 lvm[256679]: VG ceph_vg2 finished
Dec  4 05:46:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:46:54.914 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:46:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:46:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:46:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:46:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:46:54 np0005545273 stoic_shockley[256598]: {}
Dec  4 05:46:55 np0005545273 systemd[1]: libpod-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope: Deactivated successfully.
Dec  4 05:46:55 np0005545273 systemd[1]: libpod-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope: Consumed 1.528s CPU time.
Dec  4 05:46:55 np0005545273 podman[256581]: 2025-12-04 10:46:55.018891344 +0000 UTC m=+1.112087237 container died 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  4 05:46:55 np0005545273 systemd[1]: var-lib-containers-storage-overlay-64f5b751ab5ad7fd4e0200294289b843ad02cb1685420a9b89f8c02d477b2646-merged.mount: Deactivated successfully.
Dec  4 05:46:55 np0005545273 podman[256581]: 2025-12-04 10:46:55.077267265 +0000 UTC m=+1.170463138 container remove 61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:46:55 np0005545273 systemd[1]: libpod-conmon-61c27bf152a95a5061f9724e774a433e44bdc094cd458d1a3872c0192b87da8e.scope: Deactivated successfully.
Dec  4 05:46:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "format": "json"}]: dispatch
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf' of type subvolume
Dec  4 05:46:55 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:46:55.163+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf' of type subvolume
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf", "force": true, "format": "json"}]: dispatch
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec  4 05:46:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf'' moved to trashcan
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ae4ad25-1c0a-4f97-a54f-7e86dadb91cf, vol_name:cephfs) < ""
Dec  4 05:46:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 66 KiB/s wr, 6 op/s
Dec  4 05:46:56 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:56 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:46:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:46:56 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:46:57 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:46:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 124 KiB/s wr, 86 op/s
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:46:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "format": "json"}]: dispatch
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cbd234cb-faf5-4e19-a1b6-ca47791b1043", "force": true, "format": "json"}]: dispatch
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cbd234cb-faf5-4e19-a1b6-ca47791b1043'' moved to trashcan
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cbd234cb-faf5-4e19-a1b6-ca47791b1043, vol_name:cephfs) < ""
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:46:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:46:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:46:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 86 KiB/s wr, 83 op/s
Dec  4 05:47:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:47:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:47:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:47:00 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice_bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:47:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:47:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:00 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:47:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:01 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 67 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 86 KiB/s wr, 84 op/s
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2_2d147d3e-2b60-4d32-b534-bde0f2f0f206", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2_2d147d3e-2b60-4d32-b534-bde0f2f0f206, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2_2d147d3e-2b60-4d32-b534-bde0f2f0f206, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "snap_name": "eb21836e-156d-4fd6-adb6-75fc9fe014e2", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp'
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta.tmp' to config b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02/.meta'
Dec  4 05:47:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb21836e-156d-4fd6-adb6-75fc9fe014e2, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:47:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 132 KiB/s wr, 88 op/s
Dec  4 05:47:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:47:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:47:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec  4 05:47:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:47:04 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:47:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:47:04 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec  4 05:47:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "format": "json"}]: dispatch
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:05 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:05.286+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '134bada8-f9d1-4734-8cb9-4d8f094ffc02' of type subvolume
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '134bada8-f9d1-4734-8cb9-4d8f094ffc02' of type subvolume
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "134bada8-f9d1-4734-8cb9-4d8f094ffc02", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/134bada8-f9d1-4734-8cb9-4d8f094ffc02'' moved to trashcan
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:134bada8-f9d1-4734-8cb9-4d8f094ffc02, vol_name:cephfs) < ""
Dec  4 05:47:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 104 KiB/s wr, 84 op/s
Dec  4 05:47:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec  4 05:47:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec  4 05:47:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec  4 05:47:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:47:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:47:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:07 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:47:07 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:07 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:47:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:07 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec  4 05:47:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:08 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec  4 05:47:09 np0005545273 podman[256722]: 2025-12-04 10:47:09.955469525 +0000 UTC m=+0.061162332 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  4 05:47:09 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:47:09.957 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:47:09 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:47:09.959 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:47:11 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4128545697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:47:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4128545697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:47:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 68 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 104 KiB/s wr, 10 op/s
Dec  4 05:47:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 100 KiB/s wr, 9 op/s
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec  4 05:47:14 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "r", "format": "json"}]: dispatch
Dec  4 05:47:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:14 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID alice bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:14 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:15 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:15 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:15 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 447 B/s rd, 109 KiB/s wr, 9 op/s
Dec  4 05:47:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec  4 05:47:17 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:47:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:17 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/13d4aaa8-c75f-4995-b55e-e3eaac7e47b3'.
Dec  4 05:47:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp'
Dec  4 05:47:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp' to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta'
Dec  4 05:47:17 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "format": "json"}]: dispatch
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:47:18 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:47:18 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec  4 05:47:18 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec  4 05:47:18 np0005545273 podman[256746]: 2025-12-04 10:47:18.947589721 +0000 UTC m=+0.046213655 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  4 05:47:18 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:47:18.961 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:47:18 np0005545273 podman[256745]: 2025-12-04 10:47:18.987242913 +0000 UTC m=+0.087918618 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  4 05:47:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec  4 05:47:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066", "format": "json"}]: dispatch
Dec  4 05:47:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 69 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 80 KiB/s wr, 6 op/s
Dec  4 05:47:21 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:47:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec  4 05:47:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:21 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: Creating meta for ID bob with tenant 7df6681d57a74b90abc5310588588b91
Dec  4 05:47:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} v 0)
Dec  4 05:47:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:21 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:21 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"} : dispatch
Dec  4 05:47:22 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939", "mon", "allow r"], "format": "json"}]': finished
Dec  4 05:47:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 91 KiB/s wr, 7 op/s
Dec  4 05:47:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:24 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066_2a65533f-01dd-4708-9c72-21da27bce3f8", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066_2a65533f-01dd-4708-9c72-21da27bce3f8, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp'
Dec  4 05:47:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp' to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta'
Dec  4 05:47:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066_2a65533f-01dd-4708-9c72-21da27bce3f8, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:25 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "snap_name": "04fc09fb-6351-40d6-a158-b6c8dd071066", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp'
Dec  4 05:47:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta.tmp' to config b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3/.meta'
Dec  4 05:47:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04fc09fb-6351-40d6-a158-b6c8dd071066, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 92 B/s rd, 82 KiB/s wr, 7 op/s
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77'.
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/.meta.tmp'
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/.meta.tmp' to config b'/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/.meta'
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "format": "json"}]: dispatch
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:47:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:47:26
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'vms', 'backups', '.rgw.root']
Dec  4 05:47:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:47:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 98 KiB/s wr, 9 op/s
Dec  4 05:47:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:47:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417ad2040>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8413aa5580>)]
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "997ad407-3986-4029-acca-2f53511b4ff3", "format": "json"}]: dispatch
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:997ad407-3986-4029-acca-2f53511b4ff3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:997ad407-3986-4029-acca-2f53511b4ff3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ad407-3986-4029-acca-2f53511b4ff3' of type subvolume
Dec  4 05:47:28 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:28.443+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ad407-3986-4029-acca-2f53511b4ff3' of type subvolume
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "997ad407-3986-4029-acca-2f53511b4ff3", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/997ad407-3986-4029-acca-2f53511b4ff3'' moved to trashcan
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:997ad407-3986-4029-acca-2f53511b4ff3, vol_name:cephfs) < ""
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417af2580>)]
Dec  4 05:47:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 75 KiB/s wr, 7 op/s
Dec  4 05:47:29 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "tenant_id": "7df6681d57a74b90abc5310588588b91", "access_level": "rw", "format": "json"}]: dispatch
Dec  4 05:47:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]} v 0)
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]} : dispatch
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]}]': finished
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, tenant_id:7df6681d57a74b90abc5310588588b91, vol_name:cephfs) < ""
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.iwufnj(active, since 33m)
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]} : dispatch
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba,allow rw path=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939,allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_e36f2012-530d-4132-9482-586618cf68e8"]}]': finished
Dec  4 05:47:29 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Dec  4 05:47:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Dec  4 05:47:30 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Dec  4 05:47:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 90 KiB/s wr, 8 op/s
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/d0dd193d-277f-49c2-89da-22c500b1172f'.
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp'
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp' to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta'
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "format": "json"}]: dispatch
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:47:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "format": "json"}]: dispatch
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]} v 0)
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]} : dispatch
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]}]': finished
Dec  4 05:47:33 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e36f2012-530d-4132-9482-586618cf68e8", "auth_id": "bob", "format": "json"}]: dispatch
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77
Dec  4 05:47:33 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/e36f2012-530d-4132-9482-586618cf68e8/db8ca860-bdb0-4174-90be-94c2d9735d77],prefix=session evict} (starting...)
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e36f2012-530d-4132-9482-586618cf68e8, vol_name:cephfs) < ""
Dec  4 05:47:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 83 KiB/s wr, 8 op/s
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]} : dispatch
Dec  4 05:47:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens___nogroup_1de31656-5fa1-4344-818a-900ef388b939"]}]': finished
Dec  4 05:47:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Dec  4 05:47:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Dec  4 05:47:34 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Dec  4 05:47:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 70 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 6 op/s
Dec  4 05:47:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab", "format": "json"}]: dispatch
Dec  4 05:47:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "format": "json"}]: dispatch
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec  4 05:47:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Dec  4 05:47:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec  4 05:47:36 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "auth_id": "bob", "format": "json"}]: dispatch
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba
Dec  4 05:47:36 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939/461ab4cd-86b6-4246-a5a8-55c5b0abe8ba],prefix=session evict} (starting...)
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec  4 05:47:36 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec  4 05:47:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec  4 05:47:37 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666797954154184 of space, bias 1.0, pg target 0.20000393862462554 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000475949403355812 of space, bias 4.0, pg target 0.5711392840269744 quantized to 16 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:47:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 148 KiB/s wr, 13 op/s
Dec  4 05:47:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 474 B/s rd, 137 KiB/s wr, 12 op/s
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab_446616cd-30c9-420e-848d-bee94a3551ec", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab_446616cd-30c9-420e-848d-bee94a3551ec, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp'
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp' to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta'
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab_446616cd-30c9-420e-848d-bee94a3551ec, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "snap_name": "42c259ed-af7d-41af-a5f2-bcfbeccb5eab", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp'
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta.tmp' to config b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96/.meta'
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:42c259ed-af7d-41af-a5f2-bcfbeccb5eab, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1de31656-5fa1-4344-818a-900ef388b939", "format": "json"}]: dispatch
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1de31656-5fa1-4344-818a-900ef388b939, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:40 np0005545273 podman[256796]: 2025-12-04 10:47:40.960269311 +0000 UTC m=+0.061623142 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1de31656-5fa1-4344-818a-900ef388b939, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1de31656-5fa1-4344-818a-900ef388b939' of type subvolume
Dec  4 05:47:40 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:40.961+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1de31656-5fa1-4344-818a-900ef388b939' of type subvolume
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1de31656-5fa1-4344-818a-900ef388b939", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1de31656-5fa1-4344-818a-900ef388b939'' moved to trashcan
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:47:40 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1de31656-5fa1-4344-818a-900ef388b939, vol_name:cephfs) < ""
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.072192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261072314, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1719, "num_deletes": 253, "total_data_size": 2210906, "memory_usage": 2244944, "flush_reason": "Manual Compaction"}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261090750, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2173089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24536, "largest_seqno": 26254, "table_properties": {"data_size": 2165466, "index_size": 4245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18866, "raw_average_key_size": 21, "raw_value_size": 2148958, "raw_average_value_size": 2393, "num_data_blocks": 189, "num_entries": 898, "num_filter_entries": 898, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845154, "oldest_key_time": 1764845154, "file_creation_time": 1764845261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18612 microseconds, and 10921 cpu microseconds.
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.090816) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2173089 bytes OK
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.090853) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.092882) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.092903) EVENT_LOG_v1 {"time_micros": 1764845261092895, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.092926) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2202916, prev total WAL file size 2202916, number of live WAL files 2.
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.094116) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2122KB)], [56(9281KB)]
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261094147, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11677006, "oldest_snapshot_seqno": -1}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5638 keys, 9974100 bytes, temperature: kUnknown
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261157589, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 9974100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9933099, "index_size": 25787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 140410, "raw_average_key_size": 24, "raw_value_size": 9828761, "raw_average_value_size": 1743, "num_data_blocks": 1071, "num_entries": 5638, "num_filter_entries": 5638, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.157910) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 9974100 bytes
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.159303) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.8 rd, 157.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 6166, records dropped: 528 output_compression: NoCompression
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.159324) EVENT_LOG_v1 {"time_micros": 1764845261159313, "job": 30, "event": "compaction_finished", "compaction_time_micros": 63536, "compaction_time_cpu_micros": 21544, "output_level": 6, "num_output_files": 1, "total_output_size": 9974100, "num_input_records": 6166, "num_output_records": 5638, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261159888, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845261162073, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.094007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:47:41 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:47:41.162139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:47:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 71 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 118 KiB/s wr, 10 op/s
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s wr, 7 op/s
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "format": "json"}]: dispatch
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:47:43 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:47:43.824+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '002e05aa-0dc4-4f1b-ba53-39cac0015b96' of type subvolume
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '002e05aa-0dc4-4f1b-ba53-39cac0015b96' of type subvolume
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "002e05aa-0dc4-4f1b-ba53-39cac0015b96", "force": true, "format": "json"}]: dispatch
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/002e05aa-0dc4-4f1b-ba53-39cac0015b96'' moved to trashcan
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:47:43 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:002e05aa-0dc4-4f1b-ba53-39cac0015b96, vol_name:cephfs) < ""
Dec  4 05:47:44 np0005545273 nova_compute[244644]: 2025-12-04 10:47:44.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:45 np0005545273 nova_compute[244644]: 2025-12-04 10:47:45.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:45 np0005545273 nova_compute[244644]: 2025-12-04 10:47:45.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:47:45 np0005545273 nova_compute[244644]: 2025-12-04 10:47:45.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:47:45 np0005545273 nova_compute[244644]: 2025-12-04 10:47:45.376 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:47:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 71 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s wr, 6 op/s
Dec  4 05:47:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Dec  4 05:47:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Dec  4 05:47:46 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.393 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.394 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.394 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:47:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec  4 05:47:47 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:47:47 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449490454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:47:47 np0005545273 nova_compute[244644]: 2025-12-04 10:47:47.946 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.127 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.129 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5038MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.129 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.130 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.199 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.199 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.234 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:47:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:47:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820531978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.797 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.803 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.817 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.819 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:47:48 np0005545273 nova_compute[244644]: 2025-12-04 10:47:48.819 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:47:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec  4 05:47:49 np0005545273 nova_compute[244644]: 2025-12-04 10:47:49.814 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:49 np0005545273 nova_compute[244644]: 2025-12-04 10:47:49.883 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:49 np0005545273 podman[256862]: 2025-12-04 10:47:49.944975989 +0000 UTC m=+0.049574808 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:47:50 np0005545273 podman[256861]: 2025-12-04 10:47:50.005028443 +0000 UTC m=+0.112533483 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  4 05:47:51 np0005545273 nova_compute[244644]: 2025-12-04 10:47:51.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:51 np0005545273 nova_compute[244644]: 2025-12-04 10:47:51.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:51 np0005545273 nova_compute[244644]: 2025-12-04 10:47:51.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:47:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 65 KiB/s wr, 6 op/s
Dec  4 05:47:52 np0005545273 nova_compute[244644]: 2025-12-04 10:47:52.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:53 np0005545273 nova_compute[244644]: 2025-12-04 10:47:53.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 39 KiB/s wr, 4 op/s
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/8a0ffa48-f0a7-4f73-a336-ef0dc6937c97'.
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp'
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp' to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta'
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "format": "json"}]: dispatch
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:47:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:47:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:47:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:47:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Dec  4 05:47:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Dec  4 05:47:54 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Dec  4 05:47:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:47:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:47:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:47:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:47:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:47:54.916 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:47:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 42 KiB/s wr, 4 op/s
Dec  4 05:47:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:47:55 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:47:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:47:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:47:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:47:56 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/1fd4480b-ac42-4524-a420-91fd304b251c'.
Dec  4 05:47:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp'
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp' to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta'
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "format": "json"}]: dispatch
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8", "format": "json"}]: dispatch
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:47:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:47:57 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:47:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:47:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:47:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:47:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:47:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:47:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:47:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:47:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:47:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:47:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:47:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:47:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:47:58 np0005545273 podman[257118]: 2025-12-04 10:47:58.896518431 +0000 UTC m=+0.038576318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:47:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030", "format": "json"}]: dispatch
Dec  4 05:47:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:47:59 np0005545273 podman[257118]: 2025-12-04 10:47:59.388526971 +0000 UTC m=+0.530584888 container create 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:47:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:47:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:47:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:47:59 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:47:59 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:47:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:47:59 np0005545273 systemd[1]: Started libpod-conmon-492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43.scope.
Dec  4 05:47:59 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:47:59 np0005545273 podman[257118]: 2025-12-04 10:47:59.536122113 +0000 UTC m=+0.678180020 container init 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:47:59 np0005545273 podman[257118]: 2025-12-04 10:47:59.544166921 +0000 UTC m=+0.686224808 container start 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 05:47:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec  4 05:47:59 np0005545273 affectionate_shirley[257134]: 167 167
Dec  4 05:47:59 np0005545273 systemd[1]: libpod-492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43.scope: Deactivated successfully.
Dec  4 05:47:59 np0005545273 podman[257118]: 2025-12-04 10:47:59.550454955 +0000 UTC m=+0.692512862 container attach 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:47:59 np0005545273 podman[257118]: 2025-12-04 10:47:59.551548732 +0000 UTC m=+0.693606619 container died 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:47:59 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5ea16818539dbbce0cd15192b9ab2fbda13a4bd23525e5480aa2e9bde5175053-merged.mount: Deactivated successfully.
Dec  4 05:47:59 np0005545273 podman[257118]: 2025-12-04 10:47:59.602776598 +0000 UTC m=+0.744834485 container remove 492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:47:59 np0005545273 systemd[1]: libpod-conmon-492f9eb23c0cea6ecb70b0b2508fe4680553949d612e75471b21635aac17fc43.scope: Deactivated successfully.
Dec  4 05:47:59 np0005545273 podman[257157]: 2025-12-04 10:47:59.788120076 +0000 UTC m=+0.048207734 container create f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:47:59 np0005545273 systemd[1]: Started libpod-conmon-f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f.scope.
Dec  4 05:47:59 np0005545273 podman[257157]: 2025-12-04 10:47:59.769032847 +0000 UTC m=+0.029120525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:47:59 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:47:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:47:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:47:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:47:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:47:59 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:47:59 np0005545273 podman[257157]: 2025-12-04 10:47:59.886471369 +0000 UTC m=+0.146559027 container init f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:47:59 np0005545273 podman[257157]: 2025-12-04 10:47:59.895912901 +0000 UTC m=+0.156000559 container start f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:47:59 np0005545273 podman[257157]: 2025-12-04 10:47:59.901451086 +0000 UTC m=+0.161538754 container attach f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:48:00 np0005545273 hungry_poincare[257173]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:48:00 np0005545273 hungry_poincare[257173]: --> All data devices are unavailable
Dec  4 05:48:00 np0005545273 systemd[1]: libpod-f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f.scope: Deactivated successfully.
Dec  4 05:48:00 np0005545273 podman[257157]: 2025-12-04 10:48:00.408935567 +0000 UTC m=+0.669023215 container died f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  4 05:48:01 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c0e72c214ae9742b0a6ebaa5ac923bd846d9bae366f691842002e5551afe6c2e-merged.mount: Deactivated successfully.
Dec  4 05:48:01 np0005545273 podman[257157]: 2025-12-04 10:48:01.349584456 +0000 UTC m=+1.609672114 container remove f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:48:01 np0005545273 systemd[1]: libpod-conmon-f497c33bbffd47648935dbb3eda5b6e9ef253f0557abdb1c604fa7e861dda60f.scope: Deactivated successfully.
Dec  4 05:48:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 72 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 2 op/s
Dec  4 05:48:01 np0005545273 podman[257268]: 2025-12-04 10:48:01.891405059 +0000 UTC m=+0.049928096 container create cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec  4 05:48:01 np0005545273 systemd[1]: Started libpod-conmon-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope.
Dec  4 05:48:01 np0005545273 podman[257268]: 2025-12-04 10:48:01.869912932 +0000 UTC m=+0.028435989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:48:01 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:48:01 np0005545273 podman[257268]: 2025-12-04 10:48:01.985091748 +0000 UTC m=+0.143614805 container init cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:48:01 np0005545273 podman[257268]: 2025-12-04 10:48:01.993363351 +0000 UTC m=+0.151886388 container start cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:48:01 np0005545273 podman[257268]: 2025-12-04 10:48:01.998502886 +0000 UTC m=+0.157026093 container attach cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:48:02 np0005545273 trusting_northcutt[257284]: 167 167
Dec  4 05:48:02 np0005545273 systemd[1]: libpod-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope: Deactivated successfully.
Dec  4 05:48:02 np0005545273 conmon[257284]: conmon cb97052c31d0a35b0983 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope/container/memory.events
Dec  4 05:48:02 np0005545273 podman[257268]: 2025-12-04 10:48:02.003619652 +0000 UTC m=+0.162142689 container died cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:48:02 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5ae1e244d3218490f8b66c33ae669b365ec570ce38f5b238859dd70f14ce93b3-merged.mount: Deactivated successfully.
Dec  4 05:48:02 np0005545273 podman[257268]: 2025-12-04 10:48:02.043488111 +0000 UTC m=+0.202011148 container remove cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_northcutt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  4 05:48:02 np0005545273 systemd[1]: libpod-conmon-cb97052c31d0a35b0983e46afbe8b237c778de277d26101d10fcc062ebb819ed.scope: Deactivated successfully.
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8_33df231b-c8c4-45b8-9a3d-95830eea1273", "force": true, "format": "json"}]: dispatch
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8_33df231b-c8c4-45b8-9a3d-95830eea1273, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp'
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp' to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta'
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8_33df231b-c8c4-45b8-9a3d-95830eea1273, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "snap_name": "6d026511-3379-4035-832a-6cafed93d0e8", "force": true, "format": "json"}]: dispatch
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp'
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta.tmp' to config b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340/.meta'
Dec  4 05:48:02 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6d026511-3379-4035-832a-6cafed93d0e8, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:48:02 np0005545273 podman[257308]: 2025-12-04 10:48:02.222115043 +0000 UTC m=+0.047281041 container create 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:48:02 np0005545273 systemd[1]: Started libpod-conmon-6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862.scope.
Dec  4 05:48:02 np0005545273 podman[257308]: 2025-12-04 10:48:02.201447426 +0000 UTC m=+0.026613444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:48:02 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:48:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:02 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:02 np0005545273 podman[257308]: 2025-12-04 10:48:02.325465589 +0000 UTC m=+0.150631607 container init 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:48:02 np0005545273 podman[257308]: 2025-12-04 10:48:02.335243459 +0000 UTC m=+0.160409457 container start 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:48:02 np0005545273 podman[257308]: 2025-12-04 10:48:02.338389516 +0000 UTC m=+0.163555534 container attach 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]: {
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:    "0": [
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:        {
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "devices": [
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "/dev/loop3"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            ],
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_name": "ceph_lv0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_size": "21470642176",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "name": "ceph_lv0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "tags": {
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cluster_name": "ceph",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.crush_device_class": "",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.encrypted": "0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.objectstore": "bluestore",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osd_id": "0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.type": "block",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.vdo": "0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.with_tpm": "0"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            },
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "type": "block",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "vg_name": "ceph_vg0"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:        }
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:    ],
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:    "1": [
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:        {
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "devices": [
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "/dev/loop4"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            ],
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_name": "ceph_lv1",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_size": "21470642176",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "name": "ceph_lv1",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "tags": {
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cluster_name": "ceph",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.crush_device_class": "",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.encrypted": "0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.objectstore": "bluestore",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osd_id": "1",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.type": "block",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.vdo": "0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.with_tpm": "0"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            },
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "type": "block",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "vg_name": "ceph_vg1"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:        }
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:    ],
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:    "2": [
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:        {
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "devices": [
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "/dev/loop5"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            ],
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_name": "ceph_lv2",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_size": "21470642176",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "name": "ceph_lv2",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "tags": {
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.cluster_name": "ceph",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.crush_device_class": "",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.encrypted": "0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.objectstore": "bluestore",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osd_id": "2",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.type": "block",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.vdo": "0",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:                "ceph.with_tpm": "0"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            },
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "type": "block",
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:            "vg_name": "ceph_vg2"
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:        }
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]:    ]
Dec  4 05:48:02 np0005545273 upbeat_hellman[257325]: }
Dec  4 05:48:02 np0005545273 systemd[1]: libpod-6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862.scope: Deactivated successfully.
Dec  4 05:48:02 np0005545273 podman[257308]: 2025-12-04 10:48:02.664553328 +0000 UTC m=+0.489719326 container died 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:48:02 np0005545273 systemd[1]: var-lib-containers-storage-overlay-96281ea761eb1818f04ead82863831fc2ae167497d6a2619dd1c6e821e66b491-merged.mount: Deactivated successfully.
Dec  4 05:48:02 np0005545273 podman[257308]: 2025-12-04 10:48:02.70862975 +0000 UTC m=+0.533795748 container remove 6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:48:02 np0005545273 systemd[1]: libpod-conmon-6691e6280665bdcb119b4307f69ccac7ad2d094ec369b6a4a05c299be6f04862.scope: Deactivated successfully.
Dec  4 05:48:03 np0005545273 podman[257408]: 2025-12-04 10:48:03.262545719 +0000 UTC m=+0.104018532 container create c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:48:03 np0005545273 podman[257408]: 2025-12-04 10:48:03.18552673 +0000 UTC m=+0.026999573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:48:03 np0005545273 systemd[1]: Started libpod-conmon-c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14.scope.
Dec  4 05:48:03 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:48:03 np0005545273 podman[257408]: 2025-12-04 10:48:03.34406481 +0000 UTC m=+0.185537633 container init c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:48:03 np0005545273 podman[257408]: 2025-12-04 10:48:03.35301783 +0000 UTC m=+0.194490633 container start c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:48:03 np0005545273 vigilant_khayyam[257424]: 167 167
Dec  4 05:48:03 np0005545273 podman[257408]: 2025-12-04 10:48:03.358498604 +0000 UTC m=+0.199971437 container attach c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:48:03 np0005545273 systemd[1]: libpod-c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14.scope: Deactivated successfully.
Dec  4 05:48:03 np0005545273 podman[257408]: 2025-12-04 10:48:03.360979545 +0000 UTC m=+0.202452358 container died c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec  4 05:48:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay-fb53b636944ff3c3a8f4f1e610e5f87327bcb8752f75890f1b09effec486c7fc-merged.mount: Deactivated successfully.
Dec  4 05:48:03 np0005545273 podman[257408]: 2025-12-04 10:48:03.406328627 +0000 UTC m=+0.247801440 container remove c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  4 05:48:03 np0005545273 systemd[1]: libpod-conmon-c803963861a3af7c9369bb3fb892df144a719b59348410e95ac01c6d82f02e14.scope: Deactivated successfully.
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s wr, 3 op/s
Dec  4 05:48:03 np0005545273 podman[257449]: 2025-12-04 10:48:03.575048746 +0000 UTC m=+0.045080316 container create e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec  4 05:48:03 np0005545273 systemd[1]: Started libpod-conmon-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope.
Dec  4 05:48:03 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:48:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:03 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:48:03 np0005545273 podman[257449]: 2025-12-04 10:48:03.553266283 +0000 UTC m=+0.023297873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:48:03 np0005545273 podman[257449]: 2025-12-04 10:48:03.650770765 +0000 UTC m=+0.120802365 container init e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:48:03 np0005545273 podman[257449]: 2025-12-04 10:48:03.66480939 +0000 UTC m=+0.134840960 container start e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:48:03 np0005545273 podman[257449]: 2025-12-04 10:48:03.668214053 +0000 UTC m=+0.138245653 container attach e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030_17447fd6-7690-4bc1-b036-20af66e1ccf6", "force": true, "format": "json"}]: dispatch
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030_17447fd6-7690-4bc1-b036-20af66e1ccf6, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp'
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp' to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta'
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030_17447fd6-7690-4bc1-b036-20af66e1ccf6, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "snap_name": "3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030", "force": true, "format": "json"}]: dispatch
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp'
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta.tmp' to config b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532/.meta'
Dec  4 05:48:03 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3b2ce6b0-6dfd-411c-99c6-17e1f8e0a030, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:48:04 np0005545273 lvm[257543]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:48:04 np0005545273 lvm[257543]: VG ceph_vg0 finished
Dec  4 05:48:04 np0005545273 lvm[257544]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:48:04 np0005545273 lvm[257544]: VG ceph_vg1 finished
Dec  4 05:48:04 np0005545273 lvm[257546]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:48:04 np0005545273 lvm[257546]: VG ceph_vg2 finished
Dec  4 05:48:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:04 np0005545273 inspiring_fermat[257465]: {}
Dec  4 05:48:04 np0005545273 systemd[1]: libpod-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope: Deactivated successfully.
Dec  4 05:48:04 np0005545273 systemd[1]: libpod-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope: Consumed 1.365s CPU time.
Dec  4 05:48:04 np0005545273 podman[257449]: 2025-12-04 10:48:04.513887331 +0000 UTC m=+0.983918901 container died e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:48:04 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ee8842a51e5d633724782f11da4dfc0551cd0c0278bbb2e078b745713adbaafb-merged.mount: Deactivated successfully.
Dec  4 05:48:04 np0005545273 podman[257449]: 2025-12-04 10:48:04.654531021 +0000 UTC m=+1.124562611 container remove e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:48:04 np0005545273 systemd[1]: libpod-conmon-e6d937a47e7e08a694cf7e07cfcb3b168b901a957689aa3cf8f35e345088b0f8.scope: Deactivated successfully.
Dec  4 05:48:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:48:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:48:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:48:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:48:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:48:05 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "format": "json"}]: dispatch
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:48:05 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:48:05.502+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7de2ac86-d29c-49e9-b8b1-f1b9a7934340' of type subvolume
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7de2ac86-d29c-49e9-b8b1-f1b9a7934340' of type subvolume
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7de2ac86-d29c-49e9-b8b1-f1b9a7934340", "force": true, "format": "json"}]: dispatch
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7de2ac86-d29c-49e9-b8b1-f1b9a7934340'' moved to trashcan
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7de2ac86-d29c-49e9-b8b1-f1b9a7934340, vol_name:cephfs) < ""
Dec  4 05:48:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 72 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 3 op/s
Dec  4 05:48:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Dec  4 05:48:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Dec  4 05:48:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "59666e24-d766-4aa9-9e78-1be546c42532", "format": "json"}]: dispatch
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:59666e24-d766-4aa9-9e78-1be546c42532, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:59666e24-d766-4aa9-9e78-1be546c42532, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:48:07 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:48:07.136+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '59666e24-d766-4aa9-9e78-1be546c42532' of type subvolume
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '59666e24-d766-4aa9-9e78-1be546c42532' of type subvolume
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "59666e24-d766-4aa9-9e78-1be546c42532", "force": true, "format": "json"}]: dispatch
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/59666e24-d766-4aa9-9e78-1be546c42532'' moved to trashcan
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:59666e24-d766-4aa9-9e78-1be546c42532, vol_name:cephfs) < ""
Dec  4 05:48:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 91 KiB/s wr, 5 op/s
Dec  4 05:48:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 91 KiB/s wr, 5 op/s
Dec  4 05:48:10 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:10.023 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:48:10 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:10.024 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:48:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:48:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279036852' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:48:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:48:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2279036852' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:48:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 92 KiB/s wr, 6 op/s
Dec  4 05:48:11 np0005545273 podman[257588]: 2025-12-04 10:48:11.981376851 +0000 UTC m=+0.072185282 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:48:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 71 KiB/s wr, 6 op/s
Dec  4 05:48:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Dec  4 05:48:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Dec  4 05:48:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 77 KiB/s wr, 7 op/s
Dec  4 05:48:16 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Dec  4 05:48:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 3 op/s
Dec  4 05:48:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 3 op/s
Dec  4 05:48:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:20 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:20.026 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:48:20 np0005545273 podman[257613]: 2025-12-04 10:48:20.949158942 +0000 UTC m=+0.057315237 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec  4 05:48:20 np0005545273 podman[257612]: 2025-12-04 10:48:20.986090489 +0000 UTC m=+0.094370327 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec  4 05:48:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 25 KiB/s wr, 2 op/s
Dec  4 05:48:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s wr, 0 op/s
Dec  4 05:48:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s wr, 0 op/s
Dec  4 05:48:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:48:26
Dec  4 05:48:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:48:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:48:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms']
Dec  4 05:48:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:48:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec  4 05:48:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:48:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:48:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:48:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec  4 05:48:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec  4 05:48:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s wr, 0 op/s
Dec  4 05:48:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660922644713851 of space, bias 1.0, pg target 0.19982767934141552 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005189120990733053 of space, bias 4.0, pg target 0.6226945188879663 quantized to 16 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:48:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:42 np0005545273 podman[257657]: 2025-12-04 10:48:42.94335199 +0000 UTC m=+0.055064781 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  4 05:48:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:44 np0005545273 nova_compute[244644]: 2025-12-04 10:48:44.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:46 np0005545273 nova_compute[244644]: 2025-12-04 10:48:46.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:46 np0005545273 nova_compute[244644]: 2025-12-04 10:48:46.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:48:46 np0005545273 nova_compute[244644]: 2025-12-04 10:48:46.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:48:46 np0005545273 nova_compute[244644]: 2025-12-04 10:48:46.699 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:48:46 np0005545273 nova_compute[244644]: 2025-12-04 10:48:46.700 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:46 np0005545273 nova_compute[244644]: 2025-12-04 10:48:46.700 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  4 05:48:46 np0005545273 nova_compute[244644]: 2025-12-04 10:48:46.723 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  4 05:48:47 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:47.186 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:48:47 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:47.187 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:48:47 np0005545273 nova_compute[244644]: 2025-12-04 10:48:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:48 np0005545273 nova_compute[244644]: 2025-12-04 10:48:48.354 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:49 np0005545273 nova_compute[244644]: 2025-12-04 10:48:49.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:49 np0005545273 nova_compute[244644]: 2025-12-04 10:48:49.387 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:48:49 np0005545273 nova_compute[244644]: 2025-12-04 10:48:49.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:48:49 np0005545273 nova_compute[244644]: 2025-12-04 10:48:49.388 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:48:49 np0005545273 nova_compute[244644]: 2025-12-04 10:48:49.388 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:48:49 np0005545273 nova_compute[244644]: 2025-12-04 10:48:49.389 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:48:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:48:49 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328841810' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:48:49 np0005545273 nova_compute[244644]: 2025-12-04 10:48:49.940 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.099 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.100 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5026MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.100 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.101 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.483 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.483 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.653 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.749 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.749 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.785 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.810 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  4 05:48:50 np0005545273 nova_compute[244644]: 2025-12-04 10:48:50.839 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:48:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:48:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154574257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:48:51 np0005545273 nova_compute[244644]: 2025-12-04 10:48:51.380 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:48:51 np0005545273 nova_compute[244644]: 2025-12-04 10:48:51.387 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:48:51 np0005545273 nova_compute[244644]: 2025-12-04 10:48:51.407 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:48:51 np0005545273 nova_compute[244644]: 2025-12-04 10:48:51.408 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:48:51 np0005545273 nova_compute[244644]: 2025-12-04 10:48:51.409 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:48:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:51 np0005545273 podman[257722]: 2025-12-04 10:48:51.962873399 +0000 UTC m=+0.063555111 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:48:52 np0005545273 podman[257721]: 2025-12-04 10:48:52.001136207 +0000 UTC m=+0.104706670 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  4 05:48:53 np0005545273 nova_compute[244644]: 2025-12-04 10:48:53.409 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:53 np0005545273 nova_compute[244644]: 2025-12-04 10:48:53.410 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:53 np0005545273 nova_compute[244644]: 2025-12-04 10:48:53.410 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:53 np0005545273 nova_compute[244644]: 2025-12-04 10:48:53.411 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:53 np0005545273 nova_compute[244644]: 2025-12-04 10:48:53.411 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:48:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:54 np0005545273 nova_compute[244644]: 2025-12-04 10:48:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:54 np0005545273 nova_compute[244644]: 2025-12-04 10:48:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:48:54 np0005545273 nova_compute[244644]: 2025-12-04 10:48:54.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/3c9b0285-3124-4ba7-b951-215aec98e0e4'.
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp'
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp' to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta'
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "format": "json"}]: dispatch
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:48:54 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:48:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:48:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:48:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:48:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:54.917 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:48:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:54.918 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:48:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:48:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 73 MiB data, 298 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:48:56 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:48:56.189 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:48:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec  4 05:48:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:48:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:48:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6", "format": "json"}]: dispatch
Dec  4 05:48:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:48:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:48:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:48:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:48:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:48:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:48:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec  4 05:48:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6_e1626f5c-e61a-4e5c-8eae-8ed43c8ee857", "force": true, "format": "json"}]: dispatch
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6_e1626f5c-e61a-4e5c-8eae-8ed43c8ee857, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp'
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp' to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta'
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6_e1626f5c-e61a-4e5c-8eae-8ed43c8ee857, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "snap_name": "c8af7113-93d2-4d4c-9380-c06be20483a6", "force": true, "format": "json"}]: dispatch
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp'
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta.tmp' to config b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc/.meta'
Dec  4 05:49:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c8af7113-93d2-4d4c-9380-c06be20483a6, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:49:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Dec  4 05:49:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "format": "json"}]: dispatch
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:49:05 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:05.371+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1db2d22c-803f-4ebe-b241-8ba03a81e7dc' of type subvolume
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1db2d22c-803f-4ebe-b241-8ba03a81e7dc' of type subvolume
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1db2d22c-803f-4ebe-b241-8ba03a81e7dc", "force": true, "format": "json"}]: dispatch
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1db2d22c-803f-4ebe-b241-8ba03a81e7dc'' moved to trashcan
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1db2d22c-803f-4ebe-b241-8ba03a81e7dc, vol_name:cephfs) < ""
Dec  4 05:49:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 73 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s wr, 2 op/s
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:49:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Dec  4 05:49:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Dec  4 05:49:06 np0005545273 podman[257911]: 2025-12-04 10:49:06.022935587 +0000 UTC m=+0.023895088 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:49:06 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Dec  4 05:49:06 np0005545273 podman[257911]: 2025-12-04 10:49:06.518873725 +0000 UTC m=+0.519833206 container create 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:49:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:49:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:06 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:49:06 np0005545273 systemd[1]: Started libpod-conmon-8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06.scope.
Dec  4 05:49:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:49:06 np0005545273 podman[257911]: 2025-12-04 10:49:06.713452709 +0000 UTC m=+0.714412210 container init 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:49:06 np0005545273 podman[257911]: 2025-12-04 10:49:06.722984453 +0000 UTC m=+0.723943934 container start 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 05:49:06 np0005545273 podman[257911]: 2025-12-04 10:49:06.726943789 +0000 UTC m=+0.727903300 container attach 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:49:06 np0005545273 adoring_booth[257927]: 167 167
Dec  4 05:49:06 np0005545273 systemd[1]: libpod-8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06.scope: Deactivated successfully.
Dec  4 05:49:06 np0005545273 podman[257911]: 2025-12-04 10:49:06.732493625 +0000 UTC m=+0.733453116 container died 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:49:06 np0005545273 systemd[1]: var-lib-containers-storage-overlay-085a80b3b9e43448c1521c88b2d0675c8fd7670f89fd20db187915f7a7ef0dab-merged.mount: Deactivated successfully.
Dec  4 05:49:06 np0005545273 podman[257911]: 2025-12-04 10:49:06.783082147 +0000 UTC m=+0.784041628 container remove 8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_booth, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:49:06 np0005545273 systemd[1]: libpod-conmon-8c030475caa713d4ac8ffd3a6ce9881088bbc68ec0f05dc62fcd232606dbdd06.scope: Deactivated successfully.
Dec  4 05:49:06 np0005545273 podman[257952]: 2025-12-04 10:49:06.963573835 +0000 UTC m=+0.050425949 container create 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:49:07 np0005545273 systemd[1]: Started libpod-conmon-8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565.scope.
Dec  4 05:49:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:49:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:07 np0005545273 podman[257952]: 2025-12-04 10:49:06.945184584 +0000 UTC m=+0.032036718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:49:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:07 np0005545273 podman[257952]: 2025-12-04 10:49:07.178078218 +0000 UTC m=+0.264930332 container init 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:49:07 np0005545273 podman[257952]: 2025-12-04 10:49:07.184255769 +0000 UTC m=+0.271107883 container start 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:49:07 np0005545273 podman[257952]: 2025-12-04 10:49:07.412838947 +0000 UTC m=+0.499691061 container attach 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:49:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 3 op/s
Dec  4 05:49:07 np0005545273 bold_mestorf[257969]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:49:07 np0005545273 bold_mestorf[257969]: --> All data devices are unavailable
Dec  4 05:49:07 np0005545273 systemd[1]: libpod-8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565.scope: Deactivated successfully.
Dec  4 05:49:07 np0005545273 podman[257952]: 2025-12-04 10:49:07.664731197 +0000 UTC m=+0.751583311 container died 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:49:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8835d61c306bcbbb7dedfd2cd03f0fb12b44baa0fa40f53fb46e09cbeae5b9ff-merged.mount: Deactivated successfully.
Dec  4 05:49:07 np0005545273 podman[257952]: 2025-12-04 10:49:07.957974312 +0000 UTC m=+1.044826416 container remove 8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:49:07 np0005545273 systemd[1]: libpod-conmon-8924987769b083d2730a0954ee8da590a058b3c53bd61b0e9948e7a2511a5565.scope: Deactivated successfully.
Dec  4 05:49:08 np0005545273 podman[258062]: 2025-12-04 10:49:08.402823106 +0000 UTC m=+0.041895679 container create 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 05:49:08 np0005545273 systemd[1]: Started libpod-conmon-9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335.scope.
Dec  4 05:49:08 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:49:08 np0005545273 podman[258062]: 2025-12-04 10:49:08.383915593 +0000 UTC m=+0.022988196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:49:08 np0005545273 podman[258062]: 2025-12-04 10:49:08.485135196 +0000 UTC m=+0.124207769 container init 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:49:08 np0005545273 podman[258062]: 2025-12-04 10:49:08.491758488 +0000 UTC m=+0.130831061 container start 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  4 05:49:08 np0005545273 podman[258062]: 2025-12-04 10:49:08.495379067 +0000 UTC m=+0.134451640 container attach 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:49:08 np0005545273 loving_cerf[258080]: 167 167
Dec  4 05:49:08 np0005545273 systemd[1]: libpod-9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335.scope: Deactivated successfully.
Dec  4 05:49:08 np0005545273 podman[258062]: 2025-12-04 10:49:08.497599702 +0000 UTC m=+0.136672275 container died 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:49:08 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0e894b3b78cfd64c477fe0049adf76dceda3722fceb97511d8ee60944f6dbb8f-merged.mount: Deactivated successfully.
Dec  4 05:49:08 np0005545273 podman[258062]: 2025-12-04 10:49:08.534077756 +0000 UTC m=+0.173150359 container remove 9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:49:08 np0005545273 systemd[1]: libpod-conmon-9c4925b96bb15dcb8fed627d18a9963806fd94b446e36535d3813e54c3dad335.scope: Deactivated successfully.
Dec  4 05:49:08 np0005545273 podman[258104]: 2025-12-04 10:49:08.703842261 +0000 UTC m=+0.043897677 container create 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:49:08 np0005545273 systemd[1]: Started libpod-conmon-422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc.scope.
Dec  4 05:49:08 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:49:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:08 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:08 np0005545273 podman[258104]: 2025-12-04 10:49:08.685413969 +0000 UTC m=+0.025469295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:49:08 np0005545273 podman[258104]: 2025-12-04 10:49:08.791975484 +0000 UTC m=+0.132030810 container init 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:49:08 np0005545273 podman[258104]: 2025-12-04 10:49:08.799205711 +0000 UTC m=+0.139261017 container start 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:49:08 np0005545273 podman[258104]: 2025-12-04 10:49:08.802345009 +0000 UTC m=+0.142400345 container attach 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/16e840ce-ed12-467f-88c3-048d9d944422'.
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/.meta.tmp'
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/.meta.tmp' to config b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a/.meta'
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "format": "json"}]: dispatch
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]: {
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:    "0": [
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:        {
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "devices": [
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "/dev/loop3"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            ],
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_name": "ceph_lv0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_size": "21470642176",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "name": "ceph_lv0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "tags": {
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cluster_name": "ceph",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.crush_device_class": "",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.encrypted": "0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.objectstore": "bluestore",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osd_id": "0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.type": "block",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.vdo": "0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.with_tpm": "0"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            },
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "type": "block",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "vg_name": "ceph_vg0"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:        }
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:    ],
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:    "1": [
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:        {
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "devices": [
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "/dev/loop4"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            ],
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_name": "ceph_lv1",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_size": "21470642176",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "name": "ceph_lv1",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "tags": {
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cluster_name": "ceph",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.crush_device_class": "",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.encrypted": "0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.objectstore": "bluestore",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osd_id": "1",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.type": "block",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.vdo": "0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.with_tpm": "0"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            },
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "type": "block",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "vg_name": "ceph_vg1"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:        }
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:    ],
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:    "2": [
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:        {
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "devices": [
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "/dev/loop5"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            ],
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_name": "ceph_lv2",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_size": "21470642176",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "name": "ceph_lv2",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "tags": {
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.cluster_name": "ceph",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.crush_device_class": "",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.encrypted": "0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.objectstore": "bluestore",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osd_id": "2",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.type": "block",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.vdo": "0",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:                "ceph.with_tpm": "0"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            },
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "type": "block",
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:            "vg_name": "ceph_vg2"
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:        }
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]:    ]
Dec  4 05:49:09 np0005545273 heuristic_hopper[258121]: }
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec  4 05:49:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:49:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:49:09 np0005545273 systemd[1]: libpod-422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc.scope: Deactivated successfully.
Dec  4 05:49:09 np0005545273 podman[258104]: 2025-12-04 10:49:09.09546367 +0000 UTC m=+0.435518976 container died 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  4 05:49:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-36eb8b83f9b5ac8c8d9ee64bde5e8a04d73df8bd3b4034179d4abd207919babb-merged.mount: Deactivated successfully.
Dec  4 05:49:09 np0005545273 podman[258104]: 2025-12-04 10:49:09.139446499 +0000 UTC m=+0.479501815 container remove 422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hopper, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:49:09 np0005545273 systemd[1]: libpod-conmon-422ca9417af76f6488ef975e1924bea90e6f43f7630948824eb38bf926b810fc.scope: Deactivated successfully.
Dec  4 05:49:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 3 op/s
Dec  4 05:49:09 np0005545273 podman[258204]: 2025-12-04 10:49:09.588825204 +0000 UTC m=+0.044671876 container create ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:49:09 np0005545273 systemd[1]: Started libpod-conmon-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope.
Dec  4 05:49:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:49:09 np0005545273 podman[258204]: 2025-12-04 10:49:09.568859144 +0000 UTC m=+0.024705826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:49:09 np0005545273 podman[258204]: 2025-12-04 10:49:09.67465493 +0000 UTC m=+0.130501602 container init ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:49:09 np0005545273 podman[258204]: 2025-12-04 10:49:09.682454811 +0000 UTC m=+0.138301463 container start ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:49:09 np0005545273 podman[258204]: 2025-12-04 10:49:09.685394453 +0000 UTC m=+0.141241135 container attach ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:49:09 np0005545273 angry_khorana[258220]: 167 167
Dec  4 05:49:09 np0005545273 systemd[1]: libpod-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope: Deactivated successfully.
Dec  4 05:49:09 np0005545273 conmon[258220]: conmon ea04c84d09c1be28b036 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope/container/memory.events
Dec  4 05:49:09 np0005545273 podman[258204]: 2025-12-04 10:49:09.690862317 +0000 UTC m=+0.146708969 container died ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  4 05:49:09 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c6a80d9091f3f8819094a02fbf5383d4a3ac1c5a9f310681c55bb472fe4a86c6-merged.mount: Deactivated successfully.
Dec  4 05:49:09 np0005545273 podman[258204]: 2025-12-04 10:49:09.725285282 +0000 UTC m=+0.181131934 container remove ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_khorana, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:49:09 np0005545273 systemd[1]: libpod-conmon-ea04c84d09c1be28b0368d1d3deeab726be80ea7f85004aa48acdfa0aa30ec62.scope: Deactivated successfully.
Dec  4 05:49:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:09 np0005545273 podman[258245]: 2025-12-04 10:49:09.878082551 +0000 UTC m=+0.041945351 container create 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:49:09 np0005545273 systemd[1]: Started libpod-conmon-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope.
Dec  4 05:49:09 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:49:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:09 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:49:09 np0005545273 podman[258245]: 2025-12-04 10:49:09.947954615 +0000 UTC m=+0.111817425 container init 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:49:09 np0005545273 podman[258245]: 2025-12-04 10:49:09.859162807 +0000 UTC m=+0.023025637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:49:09 np0005545273 podman[258245]: 2025-12-04 10:49:09.955423248 +0000 UTC m=+0.119286058 container start 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec  4 05:49:09 np0005545273 podman[258245]: 2025-12-04 10:49:09.958886943 +0000 UTC m=+0.122749753 container attach 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:49:10 np0005545273 lvm[258342]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:49:10 np0005545273 lvm[258342]: VG ceph_vg1 finished
Dec  4 05:49:10 np0005545273 lvm[258341]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:49:10 np0005545273 lvm[258341]: VG ceph_vg0 finished
Dec  4 05:49:10 np0005545273 lvm[258344]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:49:10 np0005545273 lvm[258344]: VG ceph_vg2 finished
Dec  4 05:49:10 np0005545273 cranky_khayyam[258262]: {}
Dec  4 05:49:10 np0005545273 podman[258245]: 2025-12-04 10:49:10.808004735 +0000 UTC m=+0.971867535 container died 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:49:10 np0005545273 systemd[1]: libpod-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope: Deactivated successfully.
Dec  4 05:49:10 np0005545273 systemd[1]: libpod-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope: Consumed 1.341s CPU time.
Dec  4 05:49:10 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b2fb4821d2ff69d3f8a96e1e1be18926a63f0e45c8e6fee1312e90469ac3dc8f-merged.mount: Deactivated successfully.
Dec  4 05:49:10 np0005545273 podman[258245]: 2025-12-04 10:49:10.939262225 +0000 UTC m=+1.103125035 container remove 2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec  4 05:49:10 np0005545273 systemd[1]: libpod-conmon-2a87d54479b06d7d0d167bc662fb7fdedc9fb74df2dc7a9c956bbc40d4fe6f5c.scope: Deactivated successfully.
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1229014359' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:49:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1229014359' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:49:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 60 KiB/s wr, 4 op/s
Dec  4 05:49:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:12 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7384c38f-046a-4732-911b-7fca953ef69a", "format": "json"}]: dispatch
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7384c38f-046a-4732-911b-7fca953ef69a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7384c38f-046a-4732-911b-7fca953ef69a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7384c38f-046a-4732-911b-7fca953ef69a' of type subvolume
Dec  4 05:49:12 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:12.606+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7384c38f-046a-4732-911b-7fca953ef69a' of type subvolume
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7384c38f-046a-4732-911b-7fca953ef69a", "force": true, "format": "json"}]: dispatch
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7384c38f-046a-4732-911b-7fca953ef69a'' moved to trashcan
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:49:12 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7384c38f-046a-4732-911b-7fca953ef69a, vol_name:cephfs) < ""
Dec  4 05:49:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 59 KiB/s wr, 4 op/s
Dec  4 05:49:13 np0005545273 podman[258386]: 2025-12-04 10:49:13.965631374 +0000 UTC m=+0.066056621 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  4 05:49:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Dec  4 05:49:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Dec  4 05:49:15 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Dec  4 05:49:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 74 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 433 B/s rd, 62 KiB/s wr, 5 op/s
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/cb682cf0-c1e6-441d-935e-9c8f78e43725'.
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "format": "json"}]: dispatch
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:49:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:49:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:49:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:49:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec  4 05:49:19 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "format": "json"}]: dispatch
Dec  4 05:49:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:49:19 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:49:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec  4 05:49:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 74 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 63 KiB/s wr, 5 op/s
Dec  4 05:49:22 np0005545273 podman[258408]: 2025-12-04 10:49:22.941981724 +0000 UTC m=+0.049255689 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:49:22 np0005545273 podman[258407]: 2025-12-04 10:49:22.973439126 +0000 UTC m=+0.080810694 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "target_sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, target_sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/423a692f-d7d1-49c7-ba07-ce101229d3f2'.
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] tracking-id cb274a61-bac4-4985-8c30-8cdb47d7bbd3 for path b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] initiating progress reporting for clones...
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] progress reporting for clones has been initiated
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, target_sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.241+0000 7f8429ca1640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 98772187-8e17-49bc-bf03-9548a140f0f9)
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:23.258+0000 7f842849e640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 98772187-8e17-49bc-bf03-9548a140f0f9) -- by 0 seconds
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec  4 05:49:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Dec  4 05:49:23 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.iwufnj(active, since 34m)
Dec  4 05:49:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:49:24.244+0000 7f83fd17a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.snap/24077abd-b36a-49fd-87f6-98a6b2f3bbce/cb682cf0-c1e6-441d-935e-9c8f78e43725' to b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/423a692f-d7d1-49c7-ba07-ce101229d3f2'
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [progress INFO root] update: starting ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.clone_index] untracking cb274a61-bac4-4985-8c30-8cdb47d7bbd3
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp'
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta.tmp' to config b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9/.meta'
Dec  4 05:49:24 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 98772187-8e17-49bc-bf03-9548a140f0f9)
Dec  4 05:49:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] removing progress bars from "ceph status" output
Dec  4 05:49:25 np0005545273 ceph-mgr[75651]: [progress INFO root] complete: finished ev mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%)
Dec  4 05:49:25 np0005545273 ceph-mgr[75651]: [progress INFO root] Completed event mgr-vol-ongoing-clones (1 ongoing clones - average progress is 0.0%) in 1 seconds
Dec  4 05:49:25 np0005545273 ceph-mgr[75651]: [progress WARNING root] complete: ev mgr-vol-total-clones does not exist
Dec  4 05:49:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] finished removing progress bars from "ceph status" output
Dec  4 05:49:25 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.stats_util] marking this RTimer thread as finished; thread object ID - <volumes.fs.stats_util.CloneProgressReporter object at 0x7f8435ce5760>
Dec  4 05:49:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 194 B/s rd, 51 KiB/s wr, 3 op/s
Dec  4 05:49:25 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.iwufnj(active, since 34m)
Dec  4 05:49:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:49:26
Dec  4 05:49:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:49:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:49:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'volumes', 'images']
Dec  4 05:49:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:49:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 80 KiB/s wr, 7 op/s
Dec  4 05:49:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:49:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:49:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:49:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 5 op/s
Dec  4 05:49:29 np0005545273 ceph-mgr[75651]: [progress INFO root] Writing back 18 completed events
Dec  4 05:49:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  4 05:49:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:49:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 6 op/s
Dec  4 05:49:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 67 KiB/s wr, 6 op/s
Dec  4 05:49:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 5 op/s
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660926060230384 of space, bias 1.0, pg target 0.19982778180691152 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005518787578034542 of space, bias 4.0, pg target 0.6622545093641451 quantized to 16 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:49:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 45 KiB/s wr, 5 op/s
Dec  4 05:49:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Dec  4 05:49:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.7 KiB/s wr, 1 op/s
Dec  4 05:49:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 9.5 KiB/s wr, 0 op/s
Dec  4 05:49:44 np0005545273 nova_compute[244644]: 2025-12-04 10:49:44.362 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:44 np0005545273 podman[258488]: 2025-12-04 10:49:44.948428405 +0000 UTC m=+0.053263182 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  4 05:49:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:49:47 np0005545273 nova_compute[244644]: 2025-12-04 10:49:47.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:47 np0005545273 nova_compute[244644]: 2025-12-04 10:49:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:49:47 np0005545273 nova_compute[244644]: 2025-12-04 10:49:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:49:47 np0005545273 nova_compute[244644]: 2025-12-04 10:49:47.356 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:49:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:49:48 np0005545273 nova_compute[244644]: 2025-12-04 10:49:48.351 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:49 np0005545273 nova_compute[244644]: 2025-12-04 10:49:49.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:49:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:50 np0005545273 nova_compute[244644]: 2025-12-04 10:49:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:50 np0005545273 nova_compute[244644]: 2025-12-04 10:49:50.361 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:49:50 np0005545273 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:49:50 np0005545273 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:49:50 np0005545273 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:49:50 np0005545273 nova_compute[244644]: 2025-12-04 10:49:50.362 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:49:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:49:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131830048' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:49:50 np0005545273 nova_compute[244644]: 2025-12-04 10:49:50.902 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.052 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.054 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5005MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.054 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.054 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.117 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.118 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.139 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:49:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:49:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:49:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368910182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.682 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.689 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.703 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.705 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:49:51 np0005545273 nova_compute[244644]: 2025-12-04 10:49:51.705 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:49:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:49:53 np0005545273 nova_compute[244644]: 2025-12-04 10:49:53.706 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:53 np0005545273 nova_compute[244644]: 2025-12-04 10:49:53.706 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:53 np0005545273 nova_compute[244644]: 2025-12-04 10:49:53.706 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:53 np0005545273 nova_compute[244644]: 2025-12-04 10:49:53.707 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:49:53 np0005545273 podman[258557]: 2025-12-04 10:49:53.968372472 +0000 UTC m=+0.069156713 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  4 05:49:54 np0005545273 podman[258556]: 2025-12-04 10:49:54.002806109 +0000 UTC m=+0.106434691 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:49:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:49:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:49:54.918 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:49:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:49:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:49:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:49:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:49:55 np0005545273 nova_compute[244644]: 2025-12-04 10:49:55.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:55 np0005545273 nova_compute[244644]: 2025-12-04 10:49:55.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:49:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:49:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:49:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:49:57 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:49:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:49:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:49:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:49:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:49:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:50:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2555593512' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:50:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:50:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2555593512' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:50:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:12 np0005545273 podman[258698]: 2025-12-04 10:50:12.085836236 +0000 UTC m=+0.369367011 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  4 05:50:12 np0005545273 podman[258698]: 2025-12-04 10:50:12.242614384 +0000 UTC m=+0.526145169 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:50:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:14 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:50:14 np0005545273 podman[259031]: 2025-12-04 10:50:14.795645123 +0000 UTC m=+0.040031387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:50:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:15 np0005545273 podman[259031]: 2025-12-04 10:50:15.365487216 +0000 UTC m=+0.609873390 container create f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:50:15 np0005545273 systemd[1]: Started libpod-conmon-f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262.scope.
Dec  4 05:50:15 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:50:15 np0005545273 podman[259031]: 2025-12-04 10:50:15.454284691 +0000 UTC m=+0.698670885 container init f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:50:15 np0005545273 podman[259031]: 2025-12-04 10:50:15.466089782 +0000 UTC m=+0.710475956 container start f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:50:15 np0005545273 elastic_hertz[259049]: 167 167
Dec  4 05:50:15 np0005545273 systemd[1]: libpod-f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262.scope: Deactivated successfully.
Dec  4 05:50:15 np0005545273 podman[259031]: 2025-12-04 10:50:15.481694506 +0000 UTC m=+0.726080710 container attach f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:50:15 np0005545273 podman[259031]: 2025-12-04 10:50:15.483007298 +0000 UTC m=+0.727393472 container died f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:50:15 np0005545273 podman[259048]: 2025-12-04 10:50:15.483583052 +0000 UTC m=+0.069658324 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec  4 05:50:15 np0005545273 systemd[1]: var-lib-containers-storage-overlay-af06cecb1a94850dc47d562b06c197aaff21740f3be592d5758060f1eb4e378a-merged.mount: Deactivated successfully.
Dec  4 05:50:15 np0005545273 podman[259031]: 2025-12-04 10:50:15.532123037 +0000 UTC m=+0.776509221 container remove f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_hertz, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:50:15 np0005545273 systemd[1]: libpod-conmon-f1143291fce683fc32d3f696211866a9791d8e1b596f12641024c835545d4262.scope: Deactivated successfully.
Dec  4 05:50:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:15 np0005545273 podman[259094]: 2025-12-04 10:50:15.716539266 +0000 UTC m=+0.025875519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:50:15 np0005545273 podman[259094]: 2025-12-04 10:50:15.904706346 +0000 UTC m=+0.214042569 container create 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:50:15 np0005545273 systemd[1]: Started libpod-conmon-38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2.scope.
Dec  4 05:50:15 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:50:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:16 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:16 np0005545273 podman[259094]: 2025-12-04 10:50:16.095244925 +0000 UTC m=+0.404581248 container init 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:50:16 np0005545273 podman[259094]: 2025-12-04 10:50:16.105153289 +0000 UTC m=+0.414489522 container start 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:50:16 np0005545273 podman[259094]: 2025-12-04 10:50:16.109885096 +0000 UTC m=+0.419221379 container attach 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:50:16 np0005545273 clever_lamport[259110]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:50:16 np0005545273 clever_lamport[259110]: --> All data devices are unavailable
Dec  4 05:50:16 np0005545273 systemd[1]: libpod-38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2.scope: Deactivated successfully.
Dec  4 05:50:16 np0005545273 podman[259094]: 2025-12-04 10:50:16.670154703 +0000 UTC m=+0.979490976 container died 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:50:17 np0005545273 systemd[1]: var-lib-containers-storage-overlay-8efda55a1328c50d94e1e89e74b3dce40c8e62dd5b7db87c56471a356543a701-merged.mount: Deactivated successfully.
Dec  4 05:50:17 np0005545273 podman[259094]: 2025-12-04 10:50:17.316544991 +0000 UTC m=+1.625881254 container remove 38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  4 05:50:17 np0005545273 systemd[1]: libpod-conmon-38fca627ee81b6be44e2dbaaf8cd0b7cca01d8ec54d75437e1b524153ba27af2.scope: Deactivated successfully.
Dec  4 05:50:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:17 np0005545273 podman[259205]: 2025-12-04 10:50:17.852946982 +0000 UTC m=+0.029934288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:50:18 np0005545273 podman[259205]: 2025-12-04 10:50:18.038311294 +0000 UTC m=+0.215298500 container create 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:50:18 np0005545273 systemd[1]: Started libpod-conmon-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope.
Dec  4 05:50:18 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:50:18 np0005545273 podman[259205]: 2025-12-04 10:50:18.154041102 +0000 UTC m=+0.331028348 container init 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:50:18 np0005545273 podman[259205]: 2025-12-04 10:50:18.165678768 +0000 UTC m=+0.342666014 container start 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:50:18 np0005545273 podman[259205]: 2025-12-04 10:50:18.171119082 +0000 UTC m=+0.348106388 container attach 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:50:18 np0005545273 systemd[1]: libpod-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope: Deactivated successfully.
Dec  4 05:50:18 np0005545273 xenodochial_cannon[259222]: 167 167
Dec  4 05:50:18 np0005545273 conmon[259222]: conmon 5fa059524e5e668a5d00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope/container/memory.events
Dec  4 05:50:18 np0005545273 podman[259205]: 2025-12-04 10:50:18.174875945 +0000 UTC m=+0.351863781 container died 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:50:18 np0005545273 systemd[1]: var-lib-containers-storage-overlay-cbf8146c4ce111efd43bebcfed531a73596b9e9cca077f3fd49c4b10c1f6420e-merged.mount: Deactivated successfully.
Dec  4 05:50:18 np0005545273 podman[259205]: 2025-12-04 10:50:18.237002193 +0000 UTC m=+0.413989429 container remove 5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:50:18 np0005545273 systemd[1]: libpod-conmon-5fa059524e5e668a5d001c862dce676232483b534b8618f628cc013fe617d5a3.scope: Deactivated successfully.
Dec  4 05:50:18 np0005545273 podman[259245]: 2025-12-04 10:50:18.408310709 +0000 UTC m=+0.026954954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:50:18 np0005545273 podman[259245]: 2025-12-04 10:50:18.673170667 +0000 UTC m=+0.291814882 container create 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:50:18 np0005545273 systemd[1]: Started libpod-conmon-917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af.scope.
Dec  4 05:50:18 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:50:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:18 np0005545273 podman[259245]: 2025-12-04 10:50:18.792807542 +0000 UTC m=+0.411451767 container init 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  4 05:50:18 np0005545273 podman[259245]: 2025-12-04 10:50:18.807323498 +0000 UTC m=+0.425967703 container start 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:50:18 np0005545273 podman[259245]: 2025-12-04 10:50:18.812622519 +0000 UTC m=+0.431266754 container attach 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]: {
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:    "0": [
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:        {
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "devices": [
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "/dev/loop3"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            ],
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_name": "ceph_lv0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_size": "21470642176",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "name": "ceph_lv0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "tags": {
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cluster_name": "ceph",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.crush_device_class": "",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.encrypted": "0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.objectstore": "bluestore",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osd_id": "0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.type": "block",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.vdo": "0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.with_tpm": "0"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            },
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "type": "block",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "vg_name": "ceph_vg0"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:        }
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:    ],
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:    "1": [
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:        {
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "devices": [
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "/dev/loop4"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            ],
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_name": "ceph_lv1",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_size": "21470642176",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "name": "ceph_lv1",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "tags": {
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cluster_name": "ceph",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.crush_device_class": "",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.encrypted": "0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.objectstore": "bluestore",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osd_id": "1",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.type": "block",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.vdo": "0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.with_tpm": "0"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            },
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "type": "block",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "vg_name": "ceph_vg1"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:        }
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:    ],
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:    "2": [
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:        {
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "devices": [
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "/dev/loop5"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            ],
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_name": "ceph_lv2",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_size": "21470642176",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "name": "ceph_lv2",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "tags": {
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.cluster_name": "ceph",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.crush_device_class": "",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.encrypted": "0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.objectstore": "bluestore",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osd_id": "2",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.type": "block",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.vdo": "0",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:                "ceph.with_tpm": "0"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            },
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "type": "block",
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:            "vg_name": "ceph_vg2"
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:        }
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]:    ]
Dec  4 05:50:19 np0005545273 pedantic_perlman[259261]: }
Dec  4 05:50:19 np0005545273 systemd[1]: libpod-917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af.scope: Deactivated successfully.
Dec  4 05:50:19 np0005545273 podman[259245]: 2025-12-04 10:50:19.139524613 +0000 UTC m=+0.758168808 container died 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  4 05:50:19 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d5ed6a8f678f1ad9a55c902212ff9017cdc837ccebb6be865473dc9511f32230-merged.mount: Deactivated successfully.
Dec  4 05:50:19 np0005545273 podman[259245]: 2025-12-04 10:50:19.60507246 +0000 UTC m=+1.223716665 container remove 917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:50:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:19 np0005545273 systemd[1]: libpod-conmon-917ff093a75ab38a9ac7c59b763dfb77fef86c50cc9167debae5fea42df4c7af.scope: Deactivated successfully.
Dec  4 05:50:20 np0005545273 podman[259345]: 2025-12-04 10:50:20.138870757 +0000 UTC m=+0.082810409 container create 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 05:50:20 np0005545273 podman[259345]: 2025-12-04 10:50:20.084679374 +0000 UTC m=+0.028619126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:50:20 np0005545273 systemd[1]: Started libpod-conmon-5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f.scope.
Dec  4 05:50:20 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:50:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:20 np0005545273 podman[259345]: 2025-12-04 10:50:20.360198033 +0000 UTC m=+0.304137705 container init 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:50:20 np0005545273 podman[259345]: 2025-12-04 10:50:20.368760574 +0000 UTC m=+0.312700226 container start 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:50:20 np0005545273 suspicious_brattain[259361]: 167 167
Dec  4 05:50:20 np0005545273 systemd[1]: libpod-5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f.scope: Deactivated successfully.
Dec  4 05:50:20 np0005545273 podman[259345]: 2025-12-04 10:50:20.381023816 +0000 UTC m=+0.324963488 container attach 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  4 05:50:20 np0005545273 podman[259345]: 2025-12-04 10:50:20.381520079 +0000 UTC m=+0.325459731 container died 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:50:20 np0005545273 systemd[1]: var-lib-containers-storage-overlay-1e228dfa4a1f310fb2fbd7b4a75dfa1743499162b01e4de6c44613f93fe7163a-merged.mount: Deactivated successfully.
Dec  4 05:50:20 np0005545273 podman[259345]: 2025-12-04 10:50:20.430252477 +0000 UTC m=+0.374192139 container remove 5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:50:20 np0005545273 systemd[1]: libpod-conmon-5d7609a3bcd5906711d1f5c3f3233a253ac9efaf60ee289ba8a37190edccab9f.scope: Deactivated successfully.
Dec  4 05:50:20 np0005545273 podman[259387]: 2025-12-04 10:50:20.577442129 +0000 UTC m=+0.025874607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:50:20 np0005545273 podman[259387]: 2025-12-04 10:50:20.999439255 +0000 UTC m=+0.447871703 container create 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:50:21 np0005545273 systemd[1]: Started libpod-conmon-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope.
Dec  4 05:50:21 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:50:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:50:21 np0005545273 podman[259387]: 2025-12-04 10:50:21.097000076 +0000 UTC m=+0.545432544 container init 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:50:21 np0005545273 podman[259387]: 2025-12-04 10:50:21.106476559 +0000 UTC m=+0.554908987 container start 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:50:21 np0005545273 podman[259387]: 2025-12-04 10:50:21.1109708 +0000 UTC m=+0.559403258 container attach 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  4 05:50:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:21 np0005545273 lvm[259481]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:50:21 np0005545273 lvm[259482]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:50:21 np0005545273 lvm[259481]: VG ceph_vg0 finished
Dec  4 05:50:21 np0005545273 lvm[259482]: VG ceph_vg1 finished
Dec  4 05:50:21 np0005545273 lvm[259484]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:50:21 np0005545273 lvm[259484]: VG ceph_vg2 finished
Dec  4 05:50:21 np0005545273 lvm[259486]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:50:21 np0005545273 lvm[259486]: VG ceph_vg2 finished
Dec  4 05:50:21 np0005545273 competent_hamilton[259403]: {}
Dec  4 05:50:22 np0005545273 systemd[1]: libpod-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope: Deactivated successfully.
Dec  4 05:50:22 np0005545273 systemd[1]: libpod-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope: Consumed 1.588s CPU time.
Dec  4 05:50:22 np0005545273 podman[259387]: 2025-12-04 10:50:22.034062777 +0000 UTC m=+1.482495215 container died 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:50:22 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ba72a9dc2d464883af7598e95b320134fba61bc0732adf96d407eac20e8695f3-merged.mount: Deactivated successfully.
Dec  4 05:50:22 np0005545273 podman[259387]: 2025-12-04 10:50:22.09150806 +0000 UTC m=+1.539940528 container remove 8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  4 05:50:22 np0005545273 systemd[1]: libpod-conmon-8ed512c956f7feeaf4debf218704902db01e52ed0859dd6e6216323b1f47a694.scope: Deactivated successfully.
Dec  4 05:50:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:50:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:22 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:50:22 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:22 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec  4 05:50:22 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:50:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:23 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:50:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:25 np0005545273 podman[259524]: 2025-12-04 10:50:24.999957356 +0000 UTC m=+0.086699554 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  4 05:50:25 np0005545273 podman[259523]: 2025-12-04 10:50:25.058031036 +0000 UTC m=+0.143999295 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  4 05:50:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 75 MiB data, 300 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec  4 05:50:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:50:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:50:26
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta']
Dec  4 05:50:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:50:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec  4 05:50:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:50:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8455621a30>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84556216a0>)]
Dec  4 05:50:27 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417ad2160>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84183c83d0>)]
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8417ad8eb0>)]
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "format": "json"}]: dispatch
Dec  4 05:50:28 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98772187-8e17-49bc-bf03-9548a140f0f9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98772187-8e17-49bc-bf03-9548a140f0f9", "force": true, "format": "json"}]: dispatch
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/98772187-8e17-49bc-bf03-9548a140f0f9'' moved to trashcan
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98772187-8e17-49bc-bf03-9548a140f0f9, vol_name:cephfs) < ""
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.295+0000 7f842649a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:29.324+0000 7f8426c9b640 -1 client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: client.0 error registering admin socket command: (17) File exists
Dec  4 05:50:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec  4 05:50:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:30 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.iwufnj(active, since 36m)
Dec  4 05:50:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce_bdaceb62-36b9-4db0-b251-a4df98a35c4b", "force": true, "format": "json"}]: dispatch
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce_bdaceb62-36b9-4db0-b251-a4df98a35c4b, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce_bdaceb62-36b9-4db0-b251-a4df98a35c4b, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "snap_name": "24077abd-b36a-49fd-87f6-98a6b2f3bbce", "force": true, "format": "json"}]: dispatch
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp'
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta.tmp' to config b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d/.meta'
Dec  4 05:50:32 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24077abd-b36a-49fd-87f6-98a6b2f3bbce, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:50:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Dec  4 05:50:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "382512d2-4ae6-4a25-96be-5898161f749d", "format": "json"}]: dispatch
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:382512d2-4ae6-4a25-96be-5898161f749d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:382512d2-4ae6-4a25-96be-5898161f749d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:50:35 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:50:35.519+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '382512d2-4ae6-4a25-96be-5898161f749d' of type subvolume
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '382512d2-4ae6-4a25-96be-5898161f749d' of type subvolume
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "382512d2-4ae6-4a25-96be-5898161f749d", "force": true, "format": "json"}]: dispatch
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/382512d2-4ae6-4a25-96be-5898161f749d'' moved to trashcan
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:382512d2-4ae6-4a25-96be-5898161f749d, vol_name:cephfs) < ""
Dec  4 05:50:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 75 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 30 KiB/s wr, 2 op/s
Dec  4 05:50:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Dec  4 05:50:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Dec  4 05:50:36 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660926060230384 of space, bias 1.0, pg target 0.19982778180691152 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005574210699068629 of space, bias 4.0, pg target 0.6689052838882354 quantized to 16 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:50:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Dec  4 05:50:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Dec  4 05:50:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:41 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 5 op/s
Dec  4 05:50:42 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:50:42.062 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:50:42 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:50:42.063 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:50:43 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 42 KiB/s wr, 3 op/s
Dec  4 05:50:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Dec  4 05:50:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Dec  4 05:50:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Dec  4 05:50:45 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 458 B/s rd, 47 KiB/s wr, 3 op/s
Dec  4 05:50:45 np0005545273 podman[259594]: 2025-12-04 10:50:45.945352064 +0000 UTC m=+0.054703627 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  4 05:50:46 np0005545273 nova_compute[244644]: 2025-12-04 10:50:46.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:47 np0005545273 nova_compute[244644]: 2025-12-04 10:50:47.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:47 np0005545273 nova_compute[244644]: 2025-12-04 10:50:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:50:47 np0005545273 nova_compute[244644]: 2025-12-04 10:50:47.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:50:47 np0005545273 nova_compute[244644]: 2025-12-04 10:50:47.366 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:50:47 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 10 KiB/s wr, 1 op/s
Dec  4 05:50:48 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:50:48.065 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:50:49 np0005545273 nova_compute[244644]: 2025-12-04 10:50:49.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:49 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 10 KiB/s wr, 1 op/s
Dec  4 05:50:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:50 np0005545273 nova_compute[244644]: 2025-12-04 10:50:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:50 np0005545273 nova_compute[244644]: 2025-12-04 10:50:50.447 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:50:50 np0005545273 nova_compute[244644]: 2025-12-04 10:50:50.447 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:50:50 np0005545273 nova_compute[244644]: 2025-12-04 10:50:50.447 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:50:50 np0005545273 nova_compute[244644]: 2025-12-04 10:50:50.448 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:50:50 np0005545273 nova_compute[244644]: 2025-12-04 10:50:50.448 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/67c71b68-b799-452e-b991-191544991adf'.
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp'
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp' to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta'
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "format": "json"}]: dispatch
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:50:50 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:50:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:50:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:50:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:50:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/194033497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.002 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.139 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.141 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5020MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.141 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.141 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.226 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.227 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.244 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:50:51 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 9.7 KiB/s wr, 0 op/s
Dec  4 05:50:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:50:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/258707864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.814 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.821 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.840 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.842 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:50:51 np0005545273 nova_compute[244644]: 2025-12-04 10:50:51.842 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:50:53 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Dec  4 05:50:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2", "format": "json"}]: dispatch
Dec  4 05:50:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:50:53 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:50:53 np0005545273 nova_compute[244644]: 2025-12-04 10:50:53.843 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:53 np0005545273 nova_compute[244644]: 2025-12-04 10:50:53.843 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:50:54 np0005545273 nova_compute[244644]: 2025-12-04 10:50:54.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:50:54.919 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:50:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:50:54.920 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:50:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:50:54.920 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:50:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:50:55 np0005545273 nova_compute[244644]: 2025-12-04 10:50:55.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:55 np0005545273 nova_compute[244644]: 2025-12-04 10:50:55.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:55 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 76 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s wr, 0 op/s
Dec  4 05:50:55 np0005545273 podman[259662]: 2025-12-04 10:50:55.944128568 +0000 UTC m=+0.047111540 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:50:55 np0005545273 podman[259661]: 2025-12-04 10:50:55.997131983 +0000 UTC m=+0.095785069 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:50:57 np0005545273 nova_compute[244644]: 2025-12-04 10:50:57.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:50:57 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/2f8aed1d-7200-4215-967a-dbcd84383a27'.
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/.meta.tmp'
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/.meta.tmp' to config b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04/.meta'
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "format": "json"}]: dispatch
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec  4 05:50:58 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec  4 05:50:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:50:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:50:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:50:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:50:59 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec  4 05:51:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 76 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s wr, 1 op/s
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "format": "json"}]: dispatch
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:01 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:01.961+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dada9dc-6e1e-4a21-96e0-c09b80328b04' of type subvolume
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dada9dc-6e1e-4a21-96e0-c09b80328b04' of type subvolume
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9dada9dc-6e1e-4a21-96e0-c09b80328b04", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9dada9dc-6e1e-4a21-96e0-c09b80328b04'' moved to trashcan
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:51:01 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dada9dc-6e1e-4a21-96e0-c09b80328b04, vol_name:cephfs) < ""
Dec  4 05:51:03 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s wr, 3 op/s
Dec  4 05:51:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:05 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s wr, 2 op/s
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/3a207827-d5fd-419f-acaa-6c76538172dc'.
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/.meta.tmp'
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/.meta.tmp' to config b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c/.meta'
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "format": "json"}]: dispatch
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec  4 05:51:06 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec  4 05:51:06 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:51:06 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:51:07 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 73 KiB/s wr, 4 op/s
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.717133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467717251, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2407, "num_deletes": 506, "total_data_size": 3753390, "memory_usage": 3847280, "flush_reason": "Manual Compaction"}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467744241, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3419157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26255, "largest_seqno": 28661, "table_properties": {"data_size": 3409047, "index_size": 5900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25408, "raw_average_key_size": 20, "raw_value_size": 3386233, "raw_average_value_size": 2715, "num_data_blocks": 262, "num_entries": 1247, "num_filter_entries": 1247, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845262, "oldest_key_time": 1764845262, "file_creation_time": 1764845467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 27188 microseconds, and 10275 cpu microseconds.
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.744330) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3419157 bytes OK
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.744366) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.746772) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.746794) EVENT_LOG_v1 {"time_micros": 1764845467746788, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.746815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3742155, prev total WAL file size 3742155, number of live WAL files 2.
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.748150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3339KB)], [59(9740KB)]
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467748212, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13393257, "oldest_snapshot_seqno": -1}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5874 keys, 8850047 bytes, temperature: kUnknown
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467804934, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8850047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8810748, "index_size": 23509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 146717, "raw_average_key_size": 24, "raw_value_size": 8705403, "raw_average_value_size": 1482, "num_data_blocks": 965, "num_entries": 5874, "num_filter_entries": 5874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.805262) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8850047 bytes
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.807214) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.7 rd, 155.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.5 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(6.5) write-amplify(2.6) OK, records in: 6885, records dropped: 1011 output_compression: NoCompression
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.807262) EVENT_LOG_v1 {"time_micros": 1764845467807244, "job": 32, "event": "compaction_finished", "compaction_time_micros": 56815, "compaction_time_cpu_micros": 21528, "output_level": 6, "num_output_files": 1, "total_output_size": 8850047, "num_input_records": 6885, "num_output_records": 5874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467808362, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845467810523, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.748065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:51:07 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:51:07.810609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "format": "json"}]: dispatch
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:09 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:09.627+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac41ff8b-3e5d-413c-842a-4731aa5fec9c' of type subvolume
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac41ff8b-3e5d-413c-842a-4731aa5fec9c' of type subvolume
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ac41ff8b-3e5d-413c-842a-4731aa5fec9c", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ac41ff8b-3e5d-413c-842a-4731aa5fec9c'' moved to trashcan
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac41ff8b-3e5d-413c-842a-4731aa5fec9c, vol_name:cephfs) < ""
Dec  4 05:51:09 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec  4 05:51:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:51:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4251661480' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:51:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:51:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4251661480' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:51:11 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 77 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/d0d02848-9da1-4624-b31a-63cb7ff261f4'.
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/.meta.tmp'
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/.meta.tmp' to config b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687/.meta'
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "format": "json"}]: dispatch
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec  4 05:51:13 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:51:13 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:51:13 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 5 op/s
Dec  4 05:51:15 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:15 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 44 KiB/s wr, 3 op/s
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "format": "json"}]: dispatch
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:16 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:16.397+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a25a2b9-950c-410a-9ad7-d3f8bbfb3687' of type subvolume
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a25a2b9-950c-410a-9ad7-d3f8bbfb3687' of type subvolume
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1a25a2b9-950c-410a-9ad7-d3f8bbfb3687", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1a25a2b9-950c-410a-9ad7-d3f8bbfb3687'' moved to trashcan
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:51:16 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a25a2b9-950c-410a-9ad7-d3f8bbfb3687, vol_name:cephfs) < ""
Dec  4 05:51:16 np0005545273 podman[259706]: 2025-12-04 10:51:16.948327607 +0000 UTC m=+0.056809429 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  4 05:51:17 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 74 KiB/s wr, 5 op/s
Dec  4 05:51:19 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 77 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 3 op/s
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/967bbf13-d2e7-4e83-a18f-dbf7bce7d877'.
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/.meta.tmp'
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/.meta.tmp' to config b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab/.meta'
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "format": "json"}]: dispatch
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec  4 05:51:20 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec  4 05:51:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:51:20 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:51:20 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:21 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 77 KiB/s wr, 4 op/s
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "22efd76f-f190-4877-9402-6f240297ffab", "format": "json"}]: dispatch
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:22efd76f-f190-4877-9402-6f240297ffab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:22efd76f-f190-4877-9402-6f240297ffab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:23 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:23.493+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '22efd76f-f190-4877-9402-6f240297ffab' of type subvolume
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '22efd76f-f190-4877-9402-6f240297ffab' of type subvolume
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "22efd76f-f190-4877-9402-6f240297ffab", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/22efd76f-f190-4877-9402-6f240297ffab'' moved to trashcan
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:22efd76f-f190-4877-9402-6f240297ffab, vol_name:cephfs) < ""
Dec  4 05:51:23 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 5 op/s
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:51:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:51:24 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:51:24 np0005545273 podman[259870]: 2025-12-04 10:51:24.318302317 +0000 UTC m=+0.024634698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:51:24 np0005545273 podman[259870]: 2025-12-04 10:51:24.609272837 +0000 UTC m=+0.315605198 container create 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:51:24 np0005545273 systemd[1]: Started libpod-conmon-9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb.scope.
Dec  4 05:51:24 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:51:24 np0005545273 podman[259870]: 2025-12-04 10:51:24.725800135 +0000 UTC m=+0.432132516 container init 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:51:24 np0005545273 podman[259870]: 2025-12-04 10:51:24.73575683 +0000 UTC m=+0.442089191 container start 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:51:24 np0005545273 podman[259870]: 2025-12-04 10:51:24.740134077 +0000 UTC m=+0.446466458 container attach 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:51:24 np0005545273 recursing_kare[259886]: 167 167
Dec  4 05:51:24 np0005545273 systemd[1]: libpod-9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb.scope: Deactivated successfully.
Dec  4 05:51:24 np0005545273 podman[259870]: 2025-12-04 10:51:24.744882375 +0000 UTC m=+0.451214736 container died 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:51:24 np0005545273 systemd[1]: var-lib-containers-storage-overlay-c6fa28a68fe25802bfd8eccef5798615e60395c4f50fbd0c7a816cf9adc2530b-merged.mount: Deactivated successfully.
Dec  4 05:51:24 np0005545273 podman[259870]: 2025-12-04 10:51:24.867497472 +0000 UTC m=+0.573829833 container remove 9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_kare, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  4 05:51:24 np0005545273 systemd[1]: libpod-conmon-9a2113cbbfd43724f58103d88dd1474e441b135f07e8415a8491e1d910fe43eb.scope: Deactivated successfully.
Dec  4 05:51:25 np0005545273 podman[259911]: 2025-12-04 10:51:25.135258812 +0000 UTC m=+0.085283430 container create a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:51:25 np0005545273 podman[259911]: 2025-12-04 10:51:25.080704459 +0000 UTC m=+0.030729127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:51:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:51:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:51:25 np0005545273 systemd[1]: Started libpod-conmon-a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370.scope.
Dec  4 05:51:25 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:51:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:25 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:25 np0005545273 podman[259911]: 2025-12-04 10:51:25.219058184 +0000 UTC m=+0.169082822 container init a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 05:51:25 np0005545273 podman[259911]: 2025-12-04 10:51:25.229540632 +0000 UTC m=+0.179565250 container start a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:51:25 np0005545273 podman[259911]: 2025-12-04 10:51:25.249466622 +0000 UTC m=+0.199491250 container attach a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:51:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:25 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 3 op/s
Dec  4 05:51:25 np0005545273 confident_mayer[259928]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:51:25 np0005545273 confident_mayer[259928]: --> All data devices are unavailable
Dec  4 05:51:25 np0005545273 systemd[1]: libpod-a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370.scope: Deactivated successfully.
Dec  4 05:51:25 np0005545273 podman[259911]: 2025-12-04 10:51:25.73863797 +0000 UTC m=+0.688662598 container died a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec  4 05:51:25 np0005545273 systemd[1]: var-lib-containers-storage-overlay-613e9475016190171747413ff103c694e92a3a728b6cafe42000727fe544d34a-merged.mount: Deactivated successfully.
Dec  4 05:51:25 np0005545273 podman[259911]: 2025-12-04 10:51:25.789229876 +0000 UTC m=+0.739254494 container remove a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:51:25 np0005545273 systemd[1]: libpod-conmon-a4e4a348d667fe4cd301bbd72af304e8141c1a15bcd94bfa9b4f3ef7cac37370.scope: Deactivated successfully.
Dec  4 05:51:26 np0005545273 podman[260009]: 2025-12-04 10:51:26.098075686 +0000 UTC m=+0.064776105 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  4 05:51:26 np0005545273 podman[260010]: 2025-12-04 10:51:26.12749539 +0000 UTC m=+0.095612104 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  4 05:51:26 np0005545273 podman[260064]: 2025-12-04 10:51:26.333246833 +0000 UTC m=+0.045721155 container create 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 05:51:26 np0005545273 systemd[1]: Started libpod-conmon-80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b.scope.
Dec  4 05:51:26 np0005545273 podman[260064]: 2025-12-04 10:51:26.313638431 +0000 UTC m=+0.026112763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:51:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:51:26 np0005545273 podman[260064]: 2025-12-04 10:51:26.429214946 +0000 UTC m=+0.141689358 container init 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:51:26 np0005545273 podman[260064]: 2025-12-04 10:51:26.437659613 +0000 UTC m=+0.150133925 container start 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:51:26 np0005545273 podman[260064]: 2025-12-04 10:51:26.443085937 +0000 UTC m=+0.155560279 container attach 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:51:26 np0005545273 adoring_goldstine[260080]: 167 167
Dec  4 05:51:26 np0005545273 systemd[1]: libpod-80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b.scope: Deactivated successfully.
Dec  4 05:51:26 np0005545273 podman[260064]: 2025-12-04 10:51:26.445623849 +0000 UTC m=+0.158098161 container died 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:51:26 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4b4a4ac5bf70162cd9d2e51646acbb27b6447d365463eb8d9c29bb46deceda39-merged.mount: Deactivated successfully.
Dec  4 05:51:26 np0005545273 podman[260064]: 2025-12-04 10:51:26.494901182 +0000 UTC m=+0.207375494 container remove 80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:51:26 np0005545273 systemd[1]: libpod-conmon-80fb82eb627d9c69a8c0b274645c575f2572f3784c657344b20337992c672c5b.scope: Deactivated successfully.
Dec  4 05:51:26 np0005545273 podman[260102]: 2025-12-04 10:51:26.670917813 +0000 UTC m=+0.053437986 container create 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 05:51:26 np0005545273 systemd[1]: Started libpod-conmon-4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980.scope.
Dec  4 05:51:26 np0005545273 podman[260102]: 2025-12-04 10:51:26.645568249 +0000 UTC m=+0.028088422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:51:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:51:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:51:26
Dec  4 05:51:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:51:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:51:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'backups', 'images', 'vms', '.rgw.root']
Dec  4 05:51:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec  4 05:51:27 np0005545273 podman[260102]: 2025-12-04 10:51:27.552734335 +0000 UTC m=+0.935254528 container init 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:51:27 np0005545273 podman[260102]: 2025-12-04 10:51:27.568866392 +0000 UTC m=+0.951386545 container start 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:51:27 np0005545273 podman[260102]: 2025-12-04 10:51:27.575133166 +0000 UTC m=+0.957653329 container attach 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: [volumes INFO ceph.fs.earmarking] Earmark '' set on b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/e384d99e-5dac-4b84-8f41-b08a2fb8f434'.
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/.meta.tmp'
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/.meta.tmp' to config b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838/.meta'
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "format": "json"}]: dispatch
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec  4 05:51:27 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  4 05:51:27 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1266659978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  4 05:51:27 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 86 KiB/s wr, 5 op/s
Dec  4 05:51:27 np0005545273 bold_jackson[260119]: {
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:    "0": [
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:        {
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "devices": [
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "/dev/loop3"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            ],
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_name": "ceph_lv0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_size": "21470642176",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "name": "ceph_lv0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "tags": {
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cluster_name": "ceph",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.crush_device_class": "",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.encrypted": "0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.objectstore": "bluestore",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osd_id": "0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.type": "block",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.vdo": "0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.with_tpm": "0"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            },
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "type": "block",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "vg_name": "ceph_vg0"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:        }
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:    ],
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:    "1": [
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:        {
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "devices": [
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "/dev/loop4"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            ],
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_name": "ceph_lv1",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_size": "21470642176",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "name": "ceph_lv1",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "tags": {
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cluster_name": "ceph",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.crush_device_class": "",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.encrypted": "0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.objectstore": "bluestore",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osd_id": "1",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.type": "block",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.vdo": "0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.with_tpm": "0"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            },
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "type": "block",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "vg_name": "ceph_vg1"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:        }
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:    ],
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:    "2": [
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:        {
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "devices": [
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "/dev/loop5"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            ],
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_name": "ceph_lv2",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_size": "21470642176",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "name": "ceph_lv2",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "tags": {
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.cluster_name": "ceph",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.crush_device_class": "",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.encrypted": "0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.objectstore": "bluestore",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osd_id": "2",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.type": "block",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.vdo": "0",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:                "ceph.with_tpm": "0"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            },
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "type": "block",
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:            "vg_name": "ceph_vg2"
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:        }
Dec  4 05:51:27 np0005545273 bold_jackson[260119]:    ]
Dec  4 05:51:27 np0005545273 bold_jackson[260119]: }
Dec  4 05:51:27 np0005545273 systemd[1]: libpod-4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980.scope: Deactivated successfully.
Dec  4 05:51:27 np0005545273 podman[260102]: 2025-12-04 10:51:27.934800178 +0000 UTC m=+1.317320361 container died 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:51:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b7e38f00c362b7ebda5264cae2d88cb27feb3daf255485009c90538a74f60703-merged.mount: Deactivated successfully.
Dec  4 05:51:27 np0005545273 podman[260102]: 2025-12-04 10:51:27.991558325 +0000 UTC m=+1.374078488 container remove 4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_jackson, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:51:27 np0005545273 systemd[1]: libpod-conmon-4a7d3c69d22610e168f787e0e5006d7c6e4ba513218ad82d6b3f2a11819e6980.scope: Deactivated successfully.
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:51:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:51:28 np0005545273 podman[260203]: 2025-12-04 10:51:28.514798441 +0000 UTC m=+0.040646891 container create 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:51:28 np0005545273 systemd[1]: Started libpod-conmon-2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a.scope.
Dec  4 05:51:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:51:28 np0005545273 podman[260203]: 2025-12-04 10:51:28.592646057 +0000 UTC m=+0.118494527 container init 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:51:28 np0005545273 podman[260203]: 2025-12-04 10:51:28.497613178 +0000 UTC m=+0.023461648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:51:28 np0005545273 podman[260203]: 2025-12-04 10:51:28.601073554 +0000 UTC m=+0.126922004 container start 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:51:28 np0005545273 podman[260203]: 2025-12-04 10:51:28.605114004 +0000 UTC m=+0.130962474 container attach 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:51:28 np0005545273 silly_raman[260220]: 167 167
Dec  4 05:51:28 np0005545273 systemd[1]: libpod-2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a.scope: Deactivated successfully.
Dec  4 05:51:28 np0005545273 podman[260203]: 2025-12-04 10:51:28.609705307 +0000 UTC m=+0.135553757 container died 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:51:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-93c5362a74b16beb4f814dc2218142bf5455763575880cfbbccac13d92e9118b-merged.mount: Deactivated successfully.
Dec  4 05:51:28 np0005545273 podman[260203]: 2025-12-04 10:51:28.658551149 +0000 UTC m=+0.184399599 container remove 2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_raman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  4 05:51:28 np0005545273 systemd[1]: libpod-conmon-2719d82cf4fb3805df5e80c4bf867f8c5e55d813b74fe10334fc2ce4e408c09a.scope: Deactivated successfully.
Dec  4 05:51:28 np0005545273 podman[260243]: 2025-12-04 10:51:28.829219359 +0000 UTC m=+0.040551239 container create 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  4 05:51:28 np0005545273 systemd[1]: Started libpod-conmon-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope.
Dec  4 05:51:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:51:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:51:28 np0005545273 podman[260243]: 2025-12-04 10:51:28.81304543 +0000 UTC m=+0.024377330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:51:28 np0005545273 podman[260243]: 2025-12-04 10:51:28.914403535 +0000 UTC m=+0.125735435 container init 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:51:28 np0005545273 podman[260243]: 2025-12-04 10:51:28.921306086 +0000 UTC m=+0.132637976 container start 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:51:28 np0005545273 podman[260243]: 2025-12-04 10:51:28.925730274 +0000 UTC m=+0.137062154 container attach 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:51:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:51:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:51:29 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 78 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 56 KiB/s wr, 4 op/s
Dec  4 05:51:29 np0005545273 lvm[260340]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:51:29 np0005545273 lvm[260341]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:51:29 np0005545273 lvm[260341]: VG ceph_vg1 finished
Dec  4 05:51:29 np0005545273 lvm[260340]: VG ceph_vg0 finished
Dec  4 05:51:29 np0005545273 lvm[260343]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:51:29 np0005545273 lvm[260343]: VG ceph_vg2 finished
Dec  4 05:51:29 np0005545273 recursing_shtern[260260]: {}
Dec  4 05:51:29 np0005545273 systemd[1]: libpod-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope: Deactivated successfully.
Dec  4 05:51:29 np0005545273 systemd[1]: libpod-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope: Consumed 1.494s CPU time.
Dec  4 05:51:29 np0005545273 podman[260243]: 2025-12-04 10:51:29.843325385 +0000 UTC m=+1.054657265 container died 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  4 05:51:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d151c5d75b8b6fc0ea62794a9eda85ed66cf5e5ed69683338dbadf0c40634253-merged.mount: Deactivated successfully.
Dec  4 05:51:29 np0005545273 podman[260243]: 2025-12-04 10:51:29.889569593 +0000 UTC m=+1.100901483 container remove 9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_shtern, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:51:29 np0005545273 systemd[1]: libpod-conmon-9de9f6f574ded453480b49b21ccb13d2bae8e13b8a54b5d05ba8420da61c925c.scope: Deactivated successfully.
Dec  4 05:51:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:51:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:51:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:51:29 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:51:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "format": "json"}]: dispatch
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:772141e6-25b5-4706-b9c3-ba13ee143838, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:772141e6-25b5-4706-b9c3-ba13ee143838, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '772141e6-25b5-4706-b9c3-ba13ee143838' of type subvolume
Dec  4 05:51:30 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:30.709+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '772141e6-25b5-4706-b9c3-ba13ee143838' of type subvolume
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "772141e6-25b5-4706-b9c3-ba13ee143838", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/772141e6-25b5-4706-b9c3-ba13ee143838'' moved to trashcan
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:51:30 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:772141e6-25b5-4706-b9c3-ba13ee143838, vol_name:cephfs) < ""
Dec  4 05:51:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:51:30 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:51:31 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 80 KiB/s wr, 4 op/s
Dec  4 05:51:33 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 4 op/s
Dec  4 05:51:34 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2_d4dcbd7c-4c46-40c4-8e22-44ffaaee1088", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:34 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2_d4dcbd7c-4c46-40c4-8e22-44ffaaee1088, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:51:35 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp'
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp' to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta'
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2_d4dcbd7c-4c46-40c4-8e22-44ffaaee1088, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "snap_name": "2c9f33a3-8987-4579-986d-04d3f23eb0e2", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp'
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta.tmp' to config b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed/.meta'
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c9f33a3-8987-4579-986d-04d3f23eb0e2, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:51:35 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 78 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 3 op/s
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660926060230384 of space, bias 1.0, pg target 0.19982778180691152 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006045553998927892 of space, bias 4.0, pg target 0.725466479871347 quantized to 16 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:51:37 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 KiB/s wr, 5 op/s
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "format": "json"}]: dispatch
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec  4 05:51:38 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:51:38.288+0000 7f8423c95640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48e0e8d9-0ebb-4db4-a173-73e6b17560ed' of type subvolume
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48e0e8d9-0ebb-4db4-a173-73e6b17560ed' of type subvolume
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48e0e8d9-0ebb-4db4-a173-73e6b17560ed", "force": true, "format": "json"}]: dispatch
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/48e0e8d9-0ebb-4db4-a173-73e6b17560ed'' moved to trashcan
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  4 05:51:38 np0005545273 ceph-mgr[75651]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48e0e8d9-0ebb-4db4-a173-73e6b17560ed, vol_name:cephfs) < ""
Dec  4 05:51:39 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Dec  4 05:51:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Dec  4 05:51:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Dec  4 05:51:40 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Dec  4 05:51:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 195 B/s rd, 56 KiB/s wr, 4 op/s
Dec  4 05:51:43 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:51:43.070 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:51:43 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:51:43.071 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:51:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 585 B/s rd, 57 KiB/s wr, 5 op/s
Dec  4 05:51:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Dec  4 05:51:45 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Dec  4 05:51:45 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Dec  4 05:51:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 481 B/s rd, 32 KiB/s wr, 3 op/s
Dec  4 05:51:47 np0005545273 podman[260386]: 2025-12-04 10:51:47.973072796 +0000 UTC m=+0.072085885 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  4 05:51:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 481 B/s rd, 43 KiB/s wr, 3 op/s
Dec  4 05:51:48 np0005545273 nova_compute[244644]: 2025-12-04 10:51:48.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:48 np0005545273 nova_compute[244644]: 2025-12-04 10:51:48.377 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:49 np0005545273 nova_compute[244644]: 2025-12-04 10:51:49.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:49 np0005545273 nova_compute[244644]: 2025-12-04 10:51:49.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:51:49 np0005545273 nova_compute[244644]: 2025-12-04 10:51:49.341 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:51:49 np0005545273 nova_compute[244644]: 2025-12-04 10:51:49.360 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:51:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 441 B/s rd, 39 KiB/s wr, 3 op/s
Dec  4 05:51:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:50 np0005545273 nova_compute[244644]: 2025-12-04 10:51:50.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:50 np0005545273 nova_compute[244644]: 2025-12-04 10:51:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:50 np0005545273 nova_compute[244644]: 2025-12-04 10:51:50.371 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:51:50 np0005545273 nova_compute[244644]: 2025-12-04 10:51:50.372 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:51:50 np0005545273 nova_compute[244644]: 2025-12-04 10:51:50.372 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:51:50 np0005545273 nova_compute[244644]: 2025-12-04 10:51:50.372 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:51:50 np0005545273 nova_compute[244644]: 2025-12-04 10:51:50.373 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:51:50 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:51:50 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778203738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.020 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.177 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.178 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5002MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.179 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.246 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.246 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.260 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:51:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:51:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439446977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.820 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.827 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.843 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.845 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:51:51 np0005545273 nova_compute[244644]: 2025-12-04 10:51:51.845 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:51:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 10 KiB/s wr, 2 op/s
Dec  4 05:51:53 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:51:53.073 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:51:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s wr, 0 op/s
Dec  4 05:51:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:51:54.920 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:51:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:51:54.921 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:51:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:51:54.921 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:51:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:51:55 np0005545273 nova_compute[244644]: 2025-12-04 10:51:55.845 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:55 np0005545273 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:55 np0005545273 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:55 np0005545273 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:51:55 np0005545273 nova_compute[244644]: 2025-12-04 10:51:55.846 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:51:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Dec  4 05:51:56 np0005545273 podman[260452]: 2025-12-04 10:51:56.99002671 +0000 UTC m=+0.054692397 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:51:57 np0005545273 podman[260451]: 2025-12-04 10:51:57.016267445 +0000 UTC m=+0.083394953 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  4 05:51:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:51:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:51:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s wr, 0 op/s
Dec  4 05:51:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:51:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:51:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:51:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:51:59 np0005545273 nova_compute[244644]: 2025-12-04 10:51:59.340 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:05 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:52:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/497393934' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:52:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:52:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/497393934' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:52:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:18 np0005545273 podman[260499]: 2025-12-04 10:52:18.962880042 +0000 UTC m=+0.071847639 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  4 05:52:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:21 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:52:26
Dec  4 05:52:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:52:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:52:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'volumes']
Dec  4 05:52:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:52:27 np0005545273 podman[260520]: 2025-12-04 10:52:27.968148708 +0000 UTC m=+0.075367696 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  4 05:52:27 np0005545273 podman[260519]: 2025-12-04 10:52:27.977029996 +0000 UTC m=+0.086839268 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:52:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:52:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:52:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:52:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:52:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:52:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:52:30 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:52:30 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:52:31 np0005545273 podman[260704]: 2025-12-04 10:52:31.632484645 +0000 UTC m=+0.024596767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:52:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:32 np0005545273 podman[260704]: 2025-12-04 10:52:32.643525146 +0000 UTC m=+1.035637228 container create 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:52:33 np0005545273 systemd[1]: Started libpod-conmon-37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c.scope.
Dec  4 05:52:33 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:52:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:52:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:52:33 np0005545273 podman[260704]: 2025-12-04 10:52:33.103284811 +0000 UTC m=+1.495396923 container init 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:52:33 np0005545273 podman[260704]: 2025-12-04 10:52:33.113087692 +0000 UTC m=+1.505199774 container start 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:52:33 np0005545273 pensive_ritchie[260721]: 167 167
Dec  4 05:52:33 np0005545273 systemd[1]: libpod-37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c.scope: Deactivated successfully.
Dec  4 05:52:33 np0005545273 podman[260704]: 2025-12-04 10:52:33.414575733 +0000 UTC m=+1.806687845 container attach 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 05:52:33 np0005545273 podman[260704]: 2025-12-04 10:52:33.416847058 +0000 UTC m=+1.808959140 container died 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:52:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:34 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b1771b206f30bf7be91d3b0213eac521dcdc6fb30bbb5a858a71a75f44875822-merged.mount: Deactivated successfully.
Dec  4 05:52:34 np0005545273 podman[260704]: 2025-12-04 10:52:34.947252231 +0000 UTC m=+3.339364313 container remove 37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:52:34 np0005545273 systemd[1]: libpod-conmon-37e0a71ded473a366fcc48856271807570ecbf62a6e10c4c8dd3aabc4995b87c.scope: Deactivated successfully.
Dec  4 05:52:35 np0005545273 podman[260746]: 2025-12-04 10:52:35.129575777 +0000 UTC m=+0.055703661 container create 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:52:35 np0005545273 systemd[1]: Started libpod-conmon-97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890.scope.
Dec  4 05:52:35 np0005545273 podman[260746]: 2025-12-04 10:52:35.100303038 +0000 UTC m=+0.026430942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:52:35 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:52:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:35 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:35 np0005545273 podman[260746]: 2025-12-04 10:52:35.29140542 +0000 UTC m=+0.217533324 container init 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:52:35 np0005545273 podman[260746]: 2025-12-04 10:52:35.303611351 +0000 UTC m=+0.229739275 container start 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:52:35 np0005545273 podman[260746]: 2025-12-04 10:52:35.336812767 +0000 UTC m=+0.262940671 container attach 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 05:52:35 np0005545273 vibrant_hertz[260762]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:52:35 np0005545273 vibrant_hertz[260762]: --> All data devices are unavailable
Dec  4 05:52:35 np0005545273 systemd[1]: libpod-97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890.scope: Deactivated successfully.
Dec  4 05:52:35 np0005545273 podman[260746]: 2025-12-04 10:52:35.856689422 +0000 UTC m=+0.782817306 container died 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  4 05:52:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ebae2a4c5221c6eaf6e257c53c5d6439593f7f294a5c883601306e4e04d02806-merged.mount: Deactivated successfully.
Dec  4 05:52:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:36 np0005545273 podman[260746]: 2025-12-04 10:52:36.278376559 +0000 UTC m=+1.204504443 container remove 97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hertz, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:52:36 np0005545273 systemd[1]: libpod-conmon-97cdff515636b6ca765229e052f3ad39e2973d306d8465cff132f1c184e9e890.scope: Deactivated successfully.
Dec  4 05:52:36 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:36 np0005545273 podman[260857]: 2025-12-04 10:52:36.834576046 +0000 UTC m=+0.109410683 container create fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:52:36 np0005545273 podman[260857]: 2025-12-04 10:52:36.756450283 +0000 UTC m=+0.031284960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:52:36 np0005545273 systemd[1]: Started libpod-conmon-fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1.scope.
Dec  4 05:52:36 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:52:36 np0005545273 podman[260857]: 2025-12-04 10:52:36.939935589 +0000 UTC m=+0.214770216 container init fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:52:36 np0005545273 podman[260857]: 2025-12-04 10:52:36.948660694 +0000 UTC m=+0.223495291 container start fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:52:36 np0005545273 podman[260857]: 2025-12-04 10:52:36.953501903 +0000 UTC m=+0.228336540 container attach fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  4 05:52:36 np0005545273 gifted_bohr[260873]: 167 167
Dec  4 05:52:36 np0005545273 systemd[1]: libpod-fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1.scope: Deactivated successfully.
Dec  4 05:52:36 np0005545273 podman[260857]: 2025-12-04 10:52:36.955701597 +0000 UTC m=+0.230536194 container died fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  4 05:52:36 np0005545273 systemd[1]: var-lib-containers-storage-overlay-87566954bff5e4b129eea4a60436009a47d65ef463aa126a06b6887c962ccbf0-merged.mount: Deactivated successfully.
Dec  4 05:52:37 np0005545273 podman[260857]: 2025-12-04 10:52:37.01025758 +0000 UTC m=+0.285092167 container remove fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:52:37 np0005545273 systemd[1]: libpod-conmon-fdd3cf295b9880277424788460e76aa1ad32d9ee09ed2d310c54eb9d60e055a1.scope: Deactivated successfully.
Dec  4 05:52:37 np0005545273 podman[260897]: 2025-12-04 10:52:37.217373457 +0000 UTC m=+0.056447891 container create 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:52:37 np0005545273 systemd[1]: Started libpod-conmon-6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f.scope.
Dec  4 05:52:37 np0005545273 podman[260897]: 2025-12-04 10:52:37.188849835 +0000 UTC m=+0.027924309 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:52:37 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:52:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:37 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:37 np0005545273 podman[260897]: 2025-12-04 10:52:37.325641591 +0000 UTC m=+0.164716035 container init 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 05:52:37 np0005545273 podman[260897]: 2025-12-04 10:52:37.337237576 +0000 UTC m=+0.176312010 container start 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 05:52:37 np0005545273 podman[260897]: 2025-12-04 10:52:37.341628434 +0000 UTC m=+0.180702878 container attach 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150786373821282 of space, bias 4.0, pg target 0.7380943648585538 quantized to 16 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:52:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]: {
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:    "0": [
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:        {
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "devices": [
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "/dev/loop3"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            ],
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_name": "ceph_lv0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_size": "21470642176",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "name": "ceph_lv0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "tags": {
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cluster_name": "ceph",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.crush_device_class": "",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.encrypted": "0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.objectstore": "bluestore",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osd_id": "0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.type": "block",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.vdo": "0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.with_tpm": "0"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            },
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "type": "block",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "vg_name": "ceph_vg0"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:        }
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:    ],
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:    "1": [
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:        {
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "devices": [
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "/dev/loop4"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            ],
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_name": "ceph_lv1",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_size": "21470642176",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "name": "ceph_lv1",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "tags": {
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cluster_name": "ceph",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.crush_device_class": "",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.encrypted": "0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.objectstore": "bluestore",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osd_id": "1",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.type": "block",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.vdo": "0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.with_tpm": "0"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            },
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "type": "block",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "vg_name": "ceph_vg1"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:        }
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:    ],
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:    "2": [
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:        {
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "devices": [
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "/dev/loop5"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            ],
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_name": "ceph_lv2",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_size": "21470642176",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "name": "ceph_lv2",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "tags": {
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.cluster_name": "ceph",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.crush_device_class": "",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.encrypted": "0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.objectstore": "bluestore",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osd_id": "2",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.type": "block",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.vdo": "0",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:                "ceph.with_tpm": "0"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            },
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "type": "block",
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:            "vg_name": "ceph_vg2"
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:        }
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]:    ]
Dec  4 05:52:37 np0005545273 vibrant_vaughan[260913]: }
Dec  4 05:52:37 np0005545273 systemd[1]: libpod-6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f.scope: Deactivated successfully.
Dec  4 05:52:37 np0005545273 podman[260897]: 2025-12-04 10:52:37.698183599 +0000 UTC m=+0.537258093 container died 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:52:37 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2517bf9786a193c413fa38cde8cdd18bd05fe78c4bf25d458a3ae5f69dffa5b4-merged.mount: Deactivated successfully.
Dec  4 05:52:37 np0005545273 podman[260897]: 2025-12-04 10:52:37.747753989 +0000 UTC m=+0.586828423 container remove 6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 05:52:37 np0005545273 systemd[1]: libpod-conmon-6134a4e64ad839ca9e1fee64b5b520221a6ae3ee66825826b687d83ab633fd0f.scope: Deactivated successfully.
Dec  4 05:52:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:38 np0005545273 podman[260995]: 2025-12-04 10:52:38.255008413 +0000 UTC m=+0.045251126 container create ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  4 05:52:38 np0005545273 systemd[1]: Started libpod-conmon-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope.
Dec  4 05:52:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:52:38 np0005545273 podman[260995]: 2025-12-04 10:52:38.236166128 +0000 UTC m=+0.026408881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:52:38 np0005545273 podman[260995]: 2025-12-04 10:52:38.343664414 +0000 UTC m=+0.133907137 container init ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 05:52:38 np0005545273 podman[260995]: 2025-12-04 10:52:38.397705724 +0000 UTC m=+0.187948447 container start ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:52:38 np0005545273 podman[260995]: 2025-12-04 10:52:38.401691952 +0000 UTC m=+0.191934785 container attach ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  4 05:52:38 np0005545273 nice_tesla[261012]: 167 167
Dec  4 05:52:38 np0005545273 systemd[1]: libpod-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope: Deactivated successfully.
Dec  4 05:52:38 np0005545273 conmon[261012]: conmon ce2ab492ff3d4056f417 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope/container/memory.events
Dec  4 05:52:38 np0005545273 podman[260995]: 2025-12-04 10:52:38.407232188 +0000 UTC m=+0.197474911 container died ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:52:38 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ab39e0a9b4fd886286e8703f3b43d179c3f084de2d4b2b4e019e00305ed92c5b-merged.mount: Deactivated successfully.
Dec  4 05:52:38 np0005545273 podman[260995]: 2025-12-04 10:52:38.448227958 +0000 UTC m=+0.238470681 container remove ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_tesla, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  4 05:52:38 np0005545273 systemd[1]: libpod-conmon-ce2ab492ff3d4056f417d80dcba063a58ad5663e2faf99e529591c9d9b5a8f8d.scope: Deactivated successfully.
Dec  4 05:52:38 np0005545273 systemd-logind[798]: New session 52 of user zuul.
Dec  4 05:52:38 np0005545273 systemd[1]: Started Session 52 of User zuul.
Dec  4 05:52:38 np0005545273 podman[261039]: 2025-12-04 10:52:38.604210466 +0000 UTC m=+0.029147569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:52:38 np0005545273 podman[261039]: 2025-12-04 10:52:38.804166707 +0000 UTC m=+0.229103790 container create 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:52:39 np0005545273 systemd[1]: Started libpod-conmon-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope.
Dec  4 05:52:39 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:52:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:39 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:52:39 np0005545273 podman[261039]: 2025-12-04 10:52:39.772380654 +0000 UTC m=+1.197317737 container init 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:52:39 np0005545273 podman[261039]: 2025-12-04 10:52:39.78360501 +0000 UTC m=+1.208542113 container start 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  4 05:52:39 np0005545273 podman[261039]: 2025-12-04 10:52:39.788187413 +0000 UTC m=+1.213124496 container attach 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 05:52:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:40 np0005545273 lvm[261240]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:52:40 np0005545273 lvm[261240]: VG ceph_vg0 finished
Dec  4 05:52:40 np0005545273 lvm[261241]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:52:40 np0005545273 lvm[261241]: VG ceph_vg1 finished
Dec  4 05:52:40 np0005545273 lvm[261243]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:52:40 np0005545273 lvm[261243]: VG ceph_vg2 finished
Dec  4 05:52:40 np0005545273 optimistic_dubinsky[261091]: {}
Dec  4 05:52:40 np0005545273 systemd[1]: libpod-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope: Deactivated successfully.
Dec  4 05:52:40 np0005545273 systemd[1]: libpod-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope: Consumed 1.596s CPU time.
Dec  4 05:52:40 np0005545273 podman[261039]: 2025-12-04 10:52:40.767036852 +0000 UTC m=+2.191973915 container died 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 05:52:40 np0005545273 systemd[1]: var-lib-containers-storage-overlay-42dd504754fd873a2d8a89e91c47fde75318520c585395de70c489e38c1580f0-merged.mount: Deactivated successfully.
Dec  4 05:52:40 np0005545273 podman[261039]: 2025-12-04 10:52:40.850116716 +0000 UTC m=+2.275053779 container remove 34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  4 05:52:40 np0005545273 systemd[1]: libpod-conmon-34ce59b00851faa8f7c0bddaae442dd231a93c8482c1c8a4144d3897024a51e2.scope: Deactivated successfully.
Dec  4 05:52:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:52:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:52:40 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:52:40 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:52:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:52:41 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:52:41 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:41 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:42 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:43 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  4 05:52:43 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269093128' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec  4 05:52:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:46 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:48 np0005545273 ovs-vsctl[261505]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  4 05:52:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:48 np0005545273 nova_compute[244644]: 2025-12-04 10:52:48.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:49 np0005545273 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  4 05:52:49 np0005545273 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  4 05:52:49 np0005545273 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  4 05:52:49 np0005545273 nova_compute[244644]: 2025-12-04 10:52:49.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:49 np0005545273 nova_compute[244644]: 2025-12-04 10:52:49.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:52:49 np0005545273 nova_compute[244644]: 2025-12-04 10:52:49.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:52:49 np0005545273 nova_compute[244644]: 2025-12-04 10:52:49.392 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:52:49 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: cache status {prefix=cache status} (starting...)
Dec  4 05:52:49 np0005545273 podman[261771]: 2025-12-04 10:52:49.663159292 +0000 UTC m=+0.067395730 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:52:49 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: client ls {prefix=client ls} (starting...)
Dec  4 05:52:50 np0005545273 lvm[261884]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:52:50 np0005545273 lvm[261884]: VG ceph_vg1 finished
Dec  4 05:52:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:50 np0005545273 lvm[261895]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:52:50 np0005545273 lvm[261895]: VG ceph_vg0 finished
Dec  4 05:52:50 np0005545273 lvm[261901]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:52:50 np0005545273 lvm[261901]: VG ceph_vg2 finished
Dec  4 05:52:50 np0005545273 nova_compute[244644]: 2025-12-04 10:52:50.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:50 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: damage ls {prefix=damage ls} (starting...)
Dec  4 05:52:50 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump loads {prefix=dump loads} (starting...)
Dec  4 05:52:50 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  4 05:52:50 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14534 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:50 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  4 05:52:51 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  4 05:52:51 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  4 05:52:51 np0005545273 nova_compute[244644]: 2025-12-04 10:52:51.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14538 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:51 np0005545273 nova_compute[244644]: 2025-12-04 10:52:51.381 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:52:51 np0005545273 nova_compute[244644]: 2025-12-04 10:52:51.381 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:52:51 np0005545273 nova_compute[244644]: 2025-12-04 10:52:51.382 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:52:51 np0005545273 nova_compute[244644]: 2025-12-04 10:52:51.382 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:52:51 np0005545273 nova_compute[244644]: 2025-12-04 10:52:51.382 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:52:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec  4 05:52:51 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380433314' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec  4 05:52:51 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  4 05:52:51 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  4 05:52:51 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:51 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14540 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:51 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:52:51.945+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  4 05:52:51 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428355298' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:52:52 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: ops {prefix=ops} (starting...)
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274008835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.091 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:52:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.273 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.274 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4873MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.274 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.274 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.382 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.383 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:52:52 np0005545273 nova_compute[244644]: 2025-12-04 10:52:52.402 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1207267331' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545602561' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec  4 05:52:52 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session ls {prefix=session ls} (starting...)
Dec  4 05:52:52 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: status {prefix=status} (starting...)
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:52:52 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597533027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:52:53 np0005545273 nova_compute[244644]: 2025-12-04 10:52:53.016 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:52:53 np0005545273 nova_compute[244644]: 2025-12-04 10:52:53.022 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:52:53 np0005545273 nova_compute[244644]: 2025-12-04 10:52:53.050 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:52:53 np0005545273 nova_compute[244644]: 2025-12-04 10:52:53.052 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:52:53 np0005545273 nova_compute[244644]: 2025-12-04 10:52:53.052 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:52:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  4 05:52:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702401482' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  4 05:52:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  4 05:52:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138267846' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec  4 05:52:53 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  4 05:52:53 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015691080' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  4 05:52:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14558 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14561 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  4 05:52:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491425011' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  4 05:52:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec  4 05:52:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828374942' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec  4 05:52:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  4 05:52:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639261421' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  4 05:52:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:52:54.921 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:52:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:52:54.922 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:52:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:52:54.922 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:52:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  4 05:52:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389546349' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec  4 05:52:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  4 05:52:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545059339' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec  4 05:52:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:55 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:52:55.754+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  4 05:52:55 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  4 05:52:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  4 05:52:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154090365' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  4 05:52:56 np0005545273 nova_compute[244644]: 2025-12-04 10:52:56.053 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:56 np0005545273 nova_compute[244644]: 2025-12-04 10:52:56.053 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:56 np0005545273 nova_compute[244644]: 2025-12-04 10:52:56.054 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:56 np0005545273 nova_compute[244644]: 2025-12-04 10:52:56.054 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:52:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14578 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:56 np0005545273 nova_compute[244644]: 2025-12-04 10:52:56.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  4 05:52:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784620337' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec  4 05:52:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14580 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1605632 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919620 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1597440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1597440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 1597440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 1572864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 1564672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924444 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1556480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1556480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 1556480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 1548288 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.960282326s of 10.977203369s, submitted: 8
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 1523712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931681 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 1523712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 1515520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 1507328 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934094 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1499136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 1490944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 1490944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.030856133s of 10.057851791s, submitted: 8
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1482752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938916 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1482752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1474560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1474560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1474560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 1466368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943738 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 1466368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1458176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1458176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1449984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1449984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1449984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1441792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1441792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 1417216 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1409024 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1409024 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1400832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1400832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1392640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1392640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1392640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1384448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1384448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1376256 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 1376256 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1368064 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1368064 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1359872 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1359872 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 1351680 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 1351680 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 1343488 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 1343488 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 1310720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 1310720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1302528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1302528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1302528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1294336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 1294336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1286144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1286144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1286144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1277952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1277952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1277952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1269760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 1269760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1253376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1253376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1245184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 1245184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 1228800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 1220608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1212416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 1212416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1204224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1204224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 1204224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1196032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1196032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 1196032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1187840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 1187840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1179648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1179648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1171456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1171456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1171456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1163264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1163264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1163264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1155072 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1155072 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1146880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1138688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1138688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1138688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1130496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1130496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1122304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1122304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1122304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1114112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1114112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1105920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1105920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1105920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1097728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1097728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1097728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1089536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1089536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 1081344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 1081344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 1081344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1073152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1073152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1073152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 1064960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 1064960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 1056768 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 1056768 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 1048576 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 1048576 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 1048576 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 1040384 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 1040384 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 1040384 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 1032192 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 1015808 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 1007616 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 1007616 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 1007616 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 999424 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 999424 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 991232 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 991232 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 991232 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 983040 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 983040 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 974848 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 974848 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 966656 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 958464 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 958464 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 950272 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 950272 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 950272 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 942080 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 942080 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 933888 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 933888 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 933888 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 925696 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 925696 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 925696 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 917504 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 917504 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 909312 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 909312 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 901120 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 901120 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 901120 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 892928 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 892928 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 884736 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 884736 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 876544 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 868352 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 868352 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 868352 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 860160 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 860160 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 860160 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 851968 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 851968 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 843776 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 843776 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 835584 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 835584 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 827392 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 819200 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 819200 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 811008 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 802816 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 794624 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 786432 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 786432 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 778240 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 778240 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 778240 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 770048 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 770048 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 761856 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 761856 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 761856 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 753664 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 753664 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 745472 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 737280 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 712704 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 704512 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 696320 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 688128 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 679936 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 679936 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 671744 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 671744 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 663552 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 655360 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 655360 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 647168 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 638976 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 638976 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 630784 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 630784 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 622592 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 614400 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 598016 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 598016 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 589824 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 581632 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 581632 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 573440 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 557056 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 557056 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 548864 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 540672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 540672 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 532480 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 516096 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 516096 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 507904 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 499712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 499712 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 491520 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 475136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 475136 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 466944 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 458752 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 450560 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 442368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 442368 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 434176 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 425984 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 417792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 417792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 417792 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 409600 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 409600 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 401408 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 401408 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 393216 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 393216 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 385024 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 376832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 376832 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 368640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 368640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 368640 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 360448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 360448 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 352256 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5475 writes, 24K keys, 5475 commit groups, 1.0 writes per commit group, ingest: 18.45 MB, 0.03 MB/s#012Interval WAL: 5475 writes, 788 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 286720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 286720 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 278528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 278528 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 270336 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 262144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 262144 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 253952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 253952 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 245760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 245760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 245760 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 237568 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 237568 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 229376 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 221184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 221184 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 212992 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 204800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 204800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 204800 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 196608 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 188416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 188416 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 180224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 180224 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 172032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 172032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 172032 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 163840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 163840 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 155648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 155648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 155648 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 147456 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 139264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 139264 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 131072 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 122880 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 114688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 114688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 114688 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 106496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 106496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 106496 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 98304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 98304 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 90112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 90112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 90112 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 81920 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 73728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 73728 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 65536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 65536 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 57344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 57344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 57344 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 49152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 49152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 49152 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 40960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 40960 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 380.932373047s of 381.029602051s, submitted: 8
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 32768 heap: 72425472 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [1])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 1884160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 1875968 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 1875968 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 1867776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 1867776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 1859584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 1859584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 1851392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 1843200 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 1843200 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 1835008 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 1835008 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1826816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 1818624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 1818624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 1810432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 1810432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 1810432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 1802240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 1802240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1794048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1794048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 1785856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 1785856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 1785856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 1777664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 1777664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 1769472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 1769472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1761280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1761280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1761280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1753088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1753088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 1753088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 1744896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 1744896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 1736704 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1728512 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1720320 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1703936 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1695744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 1687552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 1679360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1671168 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1654784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1646592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1572864 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1556480 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a3a34000
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: mgrc handle_mgr_configure stats_period=5
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.998474121s of 300.141143799s, submitted: 90
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 1024000 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 917504 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.000054s
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727248690' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5703 writes, 24K keys, 5703 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5703 writes, 902 syncs, 6.32 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.694915771s of 299.933593750s, submitted: 24
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 573440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 221184 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 204800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:56 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 116.999755859s of 117.139999390s, submitted: 90
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 1040384 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcebe000/0x0/0x4ffc00000, data 0xab840/0x16c000, compress 0x0/0x0/0x0, omap 0x11ab8, meta 0x2bbe548), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 991232 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 120 ms_handle_reset con 0x55c0a3fee800 session 0x55c0a401ec40
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 9330688 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983916 data_alloc: 218103808 data_used: 3520
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x51efe8/0x5e2000, compress 0x0/0x0/0x0, omap 0x11dfd, meta 0x2bbe203), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 9175040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 9134080 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 121 ms_handle_reset con 0x55c0a2e4e400 session 0x55c0a5490380
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988171 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.107776642s of 22.230192184s, submitted: 58
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993025 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 9158656 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 10
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 9142272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca36000/0x0/0x4ffc00000, data 0x52d5d7/0x5f6000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x52e85e/0x5f7000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 9027584 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995567 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 8765440 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.814142227s of 10.116048813s, submitted: 35
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999813 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 8650752 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 11
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 8273920 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x558809/0x622000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 8151040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001261 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 6946816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 6905856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 6782976 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fc9e7000/0x0/0x4ffc00000, data 0x57b223/0x645000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.159253120s of 10.109436035s, submitted: 78
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006011 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 6619136 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9e5000/0x0/0x4ffc00000, data 0x57def9/0x647000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9d2000/0x0/0x4ffc00000, data 0x590256/0x65a000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008761 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9c8000/0x0/0x4ffc00000, data 0x59a3b3/0x664000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009539 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9bc000/0x0/0x4ffc00000, data 0x5a6634/0x670000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.569020271s of 12.741366386s, submitted: 41
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 5365760 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9b3000/0x0/0x4ffc00000, data 0x5af14c/0x679000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006899 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 5300224 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 5292032 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 5251072 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011691 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc996000/0x0/0x4ffc00000, data 0x5cac75/0x696000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 5021696 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882642746s of 10.000583649s, submitted: 38
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2809856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1728512 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1630208 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014373 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1556480 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1417216 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fb7ca000/0x0/0x4ffc00000, data 0x5f65bb/0x6c2000, compress 0x0/0x0/0x0, omap 0x11f29, meta 0x3d5e0d7), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 1056768 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017817 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.733018875s of 10.001555443s, submitted: 91
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 950272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb7a8000/0x0/0x4ffc00000, data 0x61635f/0x6e2000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019529 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1957888 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb78d000/0x0/0x4ffc00000, data 0x633c4e/0x6ff000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 1826816 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb778000/0x0/0x4ffc00000, data 0x6480c0/0x714000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1802240 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023333 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848536491s of 10.001356125s, submitted: 29
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb763000/0x0/0x4ffc00000, data 0x65cfdb/0x729000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 1867776 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022057 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1744896 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 401408 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb72d000/0x0/0x4ffc00000, data 0x68e752/0x75f000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 999424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041309 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 12
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 983040 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 1196032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.589168549s of 10.002529144s, submitted: 57
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb703000/0x0/0x4ffc00000, data 0x6ba903/0x789000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 1105920 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 1073152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 1015808 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038997 data_alloc: 218103808 data_used: 4260
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6da000/0x0/0x4ffc00000, data 0x6e2609/0x7b2000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6bc000/0x0/0x4ffc00000, data 0x6ffebe/0x7d0000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 1179648 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2170880 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 884736 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fb669000/0x0/0x4ffc00000, data 0x74b4c0/0x81f000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059855 data_alloc: 218103808 data_used: 4260
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 679936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 663552 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.600092888s of 10.000102997s, submitted: 172
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 1277952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90284032 unmapped: 892928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 1662976 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063631 data_alloc: 218103808 data_used: 4105
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1630208 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb622000/0x0/0x4ffc00000, data 0x7907b7/0x866000, compress 0x0/0x0/0x0, omap 0x12520, meta 0x3d5dae0), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90038272 unmapped: 1138688 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fb5e9000/0x0/0x4ffc00000, data 0x7ca62d/0x8a1000, compress 0x0/0x0/0x0, omap 0x12680, meta 0x3d5d980), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90152960 unmapped: 2072576 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90251264 unmapped: 1974272 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075351 data_alloc: 218103808 data_used: 4755
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90054656 unmapped: 2170880 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.681773186s of 10.032649994s, submitted: 160
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91299840 unmapped: 925696 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b5000/0x0/0x4ffc00000, data 0x7fe7a5/0x8d7000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x802d67/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 860160 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076485 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x803215/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075757 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 491520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.903874397s of 10.266777992s, submitted: 19
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 483328 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 442368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 294912 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb580000/0x0/0x4ffc00000, data 0x833caa/0x90c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078121 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91504640 unmapped: 1769472 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079017 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080545 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.004943848s of 16.979648590s, submitted: 22
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 2056192 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb539000/0x0/0x4ffc00000, data 0x87b015/0x953000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081409 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082297 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb510000/0x0/0x4ffc00000, data 0x8a3c6f/0x97c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92340224 unmapped: 933888 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4f3000/0x0/0x4ffc00000, data 0x8c1190/0x999000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4d8000/0x0/0x4ffc00000, data 0x8db7f0/0x9b4000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084117 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.924418449s of 11.061837196s, submitted: 25
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a5b1a400 session 0x55c0a5b048c0
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a450e000 session 0x55c0a5f96700
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085581 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee753/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 13
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee8b9/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb492000/0x0/0x4ffc00000, data 0x9215f8/0x9fa000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb493000/0x0/0x4ffc00000, data 0x92155d/0x9f9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090467 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.560784340s of 10.244839668s, submitted: 209
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 2392064 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089339 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087675 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.419944763s of 14.577485085s, submitted: 11
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087819 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089351 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089207 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.268507957s of 10.285860062s, submitted: 5
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088649 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090309 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.970705986s of 10.010634422s, submitted: 8
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092127 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094885 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.480938911s of 11.537956238s, submitted: 43
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095029 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097421 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.449153900s of 10.473722458s, submitted: 15
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096815 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098363 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.951936722s of 10.007729530s, submitted: 8
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.995441437s of 11.008138657s, submitted: 5
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097789 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.616669655s of 12.638894081s, submitted: 13
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 2351104 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097805 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.879154205s of 10.907876015s, submitted: 15
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 2236416 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099337 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dde1/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099177 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.439765930s of 12.481030464s, submitted: 20
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098603 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103199 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46e000/0x0/0x4ffc00000, data 0x93f98a/0xa1c000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 1122304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104299 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46f000/0x0/0x4ffc00000, data 0x93fa25/0xa1d000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.015718460s of 12.077057838s, submitted: 32
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109053 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109881 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x9415da/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.393504143s of 10.408122063s, submitted: 19
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112547 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb468000/0x0/0x4ffc00000, data 0x9416a3/0xa23000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111653 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508604050s of 11.536386490s, submitted: 13
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112619 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1064960 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46b000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94322688 unmapped: 1048576 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94330880 unmapped: 1040384 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121177 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 2031616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb445000/0x0/0x4ffc00000, data 0x964870/0xa47000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95461376 unmapped: 958464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 745472 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857433319s of 10.014651299s, submitted: 97
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 1941504 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135499 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95584256 unmapped: 1884160 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3c4000/0x0/0x4ffc00000, data 0x9e0112/0xac6000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 1875968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x9eacc4/0xad1000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 1867776 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3ac000/0x0/0x4ffc00000, data 0x9f885a/0xade000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134715 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 827392 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139427 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.366982460s of 12.445398331s, submitted: 44
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97017856 unmapped: 1499136 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1425408 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149237 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb2d4000/0x0/0x4ffc00000, data 0xacf83f/0xbb8000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97460224 unmapped: 3153920 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fb29d000/0x0/0x4ffc00000, data 0xb04a92/0xbed000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157693 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 2998272 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.291193008s of 10.482179642s, submitted: 116
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 2637824 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb217000/0x0/0x4ffc00000, data 0xb8911b/0xc73000, compress 0x0/0x0/0x0, omap 0x12dfc, meta 0x3d5d204), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 2514944 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 3121152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1851392 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174773 data_alloc: 218103808 data_used: 5091
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1802240 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb19c000/0x0/0x4ffc00000, data 0xc00bf3/0xcee000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99565568 unmapped: 2097152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99672064 unmapped: 1990656 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb16d000/0x0/0x4ffc00000, data 0xc32018/0xd1d000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1843200 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 2408448 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb147000/0x0/0x4ffc00000, data 0xc594a7/0xd45000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177381 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101490688 unmapped: 2269184 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.723609924s of 10.026507378s, submitted: 124
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 1171456 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb132000/0x0/0x4ffc00000, data 0xc6f06d/0xd59000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188495 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192129 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb067000/0x0/0x4ffc00000, data 0xd37a62/0xe24000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 1638400 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.634295464s of 11.785771370s, submitted: 94
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190545 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb04c000/0x0/0x4ffc00000, data 0xd54161/0xe40000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190921 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.272990227s of 10.301798820s, submitted: 29
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192613 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194161 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c70/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195709 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.117662430s of 13.127370834s, submitted: 5
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55e6c/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199109 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198087 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55dd4/0xe46000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.867134094s of 10.890979767s, submitted: 14
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55d37/0xe45000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197035 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 8989 writes, 34K keys, 8989 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 8989 writes, 2320 syncs, 3.87 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3286 writes, 10K keys, 3286 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s#012Interval WAL: 3286 writes, 1418 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197419 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c00/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.055023193s of 11.093473434s, submitted: 18
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.888220787s of 14.927642822s, submitted: 4
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 14
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 1105920 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c4c/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001877785s of 10.011025429s, submitted: 5
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197819 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197835 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001707077s of 10.005904198s, submitted: 3
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.169968605s of 10.174468994s, submitted: 2
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.824216843s of 17.850557327s, submitted: 6
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 1531904 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103325696 unmapped: 1482752 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199031 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.812626839s of 10.004839897s, submitted: 95
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 2637824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198585 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 2621440 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201969 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108821869s of 10.326163292s, submitted: 42
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103243776 unmapped: 2613248 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd578a0/0xe47000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205847 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208173 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb041000/0x0/0x4ffc00000, data 0xd59285/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.619541168s of 10.691827774s, submitted: 44
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207727 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208843 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884953499s of 11.015766144s, submitted: 6
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 ms_handle_reset con 0x55c0a3fef800 session 0x55c0a3818380
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 15
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208555 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 2293760 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5ae5e/0xe4c000, compress 0x0/0x0/0x0, omap 0x18dca, meta 0x3d57236), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215781 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.581476212s of 12.955293655s, submitted: 224
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216753 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214487 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217965 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.289826393s of 10.338050842s, submitted: 31
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222287 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb03a000/0x0/0x4ffc00000, data 0xd5e4e2/0xe52000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd5ff61/0xe55000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223979 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.204831123s of 11.530242920s, submitted: 36
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd60097/0xe57000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225925 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd61b66/0xe58000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd61acb/0xe57000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.297651291s of 13.383323669s, submitted: 59
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.247886658s of 39.155723572s, submitted: 13
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229415 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb032000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.258265495s of 12.265155792s, submitted: 3
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02d000/0x0/0x4ffc00000, data 0xd6514f/0xe5d000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234729 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.053594589s of 16.302835464s, submitted: 44
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103653376 unmapped: 2203648 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 16
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb029000/0x0/0x4ffc00000, data 0xd66ee7/0xe63000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240981 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 17
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 61.749771118s of 62.368705750s, submitted: 11
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238939 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a450f000 session 0x55c0a3694a80
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a3655400 session 0x55c0a6381500
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104071168 unmapped: 1785856 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 18
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238635 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238779 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.142169952s of 11.714550018s, submitted: 184
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242433 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.232194901s of 18.285558701s, submitted: 52
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244343 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.992785454s of 15.000616074s, submitted: 4
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244487 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244471 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb00b000/0x0/0x4ffc00000, data 0xd84fc5/0xe81000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe9000/0x0/0x4ffc00000, data 0xda606f/0xea3000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.500107765s of 10.000913620s, submitted: 13
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 1679360 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252421 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe2000/0x0/0x4ffc00000, data 0xdacba7/0xeaa000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104448000 unmapped: 1409024 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259093 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 1335296 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.939127922s of 10.002140999s, submitted: 20
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105013248 unmapped: 1892352 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255505 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 2793472 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260241 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xea172e/0xf9e000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.055247307s of 10.003301620s, submitted: 24
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 2891776 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264103 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 2727936 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}'
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'config show' '{prefix=config show}'
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 2236416 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}'
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}'
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 3325952 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 3301376 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 05:52:57 np0005545273 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}'
Dec  4 05:52:57 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:52:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14584 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec  4 05:52:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec  4 05:52:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  4 05:52:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183430560' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  4 05:52:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14588 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec  4 05:52:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec  4 05:52:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  4 05:52:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895689020' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8413a03400>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f8435ce5940>)]
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:52:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  4 05:52:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877293756' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  4 05:52:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14596 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:58 np0005545273 podman[263093]: 2025-12-04 10:52:58.952118649 +0000 UTC m=+0.054091893 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  4 05:52:58 np0005545273 podman[263092]: 2025-12-04 10:52:58.974384297 +0000 UTC m=+0.079789635 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible)
Dec  4 05:52:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 05:52:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  4 05:52:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586584895' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  4 05:52:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:52:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f84183ed160>)]
Dec  4 05:52:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec  4 05:52:59 np0005545273 nova_compute[244644]: 2025-12-04 10:52:59.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:52:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14602 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  4 05:52:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  4 05:52:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340669965' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  4 05:52:59 np0005545273 ceph-mon[75358]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.iwufnj(active, since 38m)
Dec  4 05:53:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14606 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  4 05:53:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  4 05:53:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268822331' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec  4 05:53:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:53:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14610 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  4 05:53:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14614 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  4 05:53:00 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  4 05:53:00 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T10:53:00.942+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  4 05:53:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  4 05:53:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062305256' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79405056 unmapped: 1327104 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1318912 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1318912 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933851 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79413248 unmapped: 1318912 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1294336 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79437824 unmapped: 1294336 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.928408623s of 10.974489212s, submitted: 10
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1277952 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 1277952 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943505 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 1261568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1253376 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 1253376 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1245184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 1245184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948331 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1236992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 1236992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955570 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 1228800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 1220608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79511552 unmapped: 1220608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1212416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 1212416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955570 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.781251907s of 17.896800995s, submitted: 14
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1204224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1204224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 1204224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1196032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 1196032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965230 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1179648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 1179648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1171456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1171456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79560704 unmapped: 1171456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967645 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 1163264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.870033264s of 10.907306671s, submitted: 12
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 1155072 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1146880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79585280 unmapped: 1146880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 1130496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974888 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 1130496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79601664 unmapped: 1130496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79593472 unmapped: 1138688 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79618048 unmapped: 1114112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1105920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982123 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79626240 unmapped: 1105920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 1097728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.865514755s of 10.889443398s, submitted: 12
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79634432 unmapped: 1097728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989358 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1081344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79642624 unmapped: 1089536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1081344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 1081344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991771 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1073152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79659008 unmapped: 1073152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1064960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.867640495s of 10.956263542s, submitted: 8
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1064960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1064960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1056768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 1056768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1048576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1048576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79683584 unmapped: 1048576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1040384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 1040384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 1032192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 1032192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 1024000 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 1015808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 1015808 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 1007616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 1007616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 1007616 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 999424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 999424 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 991232 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 991232 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 983040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 983040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 983040 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 974848 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 974848 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79757312 unmapped: 974848 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 966656 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79765504 unmapped: 966656 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 958464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79773696 unmapped: 958464 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 950272 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79781888 unmapped: 950272 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 942080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 942080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79790080 unmapped: 942080 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79798272 unmapped: 933888 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 925696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 925696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79806464 unmapped: 925696 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 917504 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 917504 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 909312 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 909312 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 909312 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 901120 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 892928 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79839232 unmapped: 892928 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79847424 unmapped: 884736 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 876544 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 868352 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 868352 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79872000 unmapped: 860160 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79880192 unmapped: 851968 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79888384 unmapped: 843776 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 835584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 835584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79896576 unmapped: 835584 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 827392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 827392 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 819200 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 819200 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 811008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 811008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79921152 unmapped: 811008 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 794624 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79937536 unmapped: 794624 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 786432 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 786432 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79945728 unmapped: 786432 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 778240 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 778240 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 770048 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 770048 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 761856 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 761856 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 761856 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 753664 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 753664 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 753664 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 745472 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 745472 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 737280 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 737280 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [1])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 729088 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 729088 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 729088 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 720896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 720896 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 712704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 712704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 712704 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 704512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 704512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 704512 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 696320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 696320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 696320 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 688128 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 688128 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 679936 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 679936 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 671744 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 671744 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 671744 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 663552 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 663552 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 655360 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 655360 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 655360 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 647168 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 647168 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 638976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 638976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 638976 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 630784 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 630784 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 622592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 622592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80109568 unmapped: 622592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 614400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 614400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 606208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 606208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 606208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 598016 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80134144 unmapped: 598016 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 589824 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 589824 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 589824 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 581632 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80150528 unmapped: 581632 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 573440 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80158720 unmapped: 573440 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 565248 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 565248 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 557056 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 557056 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 557056 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 548864 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 548864 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 548864 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 540672 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 540672 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 532480 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 532480 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80199680 unmapped: 532480 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 524288 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 524288 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 516096 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 516096 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80216064 unmapped: 516096 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 507904 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 507904 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 499712 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 499712 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80232448 unmapped: 499712 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 491520 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 491520 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 483328 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80248832 unmapped: 483328 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 475136 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 475136 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 475136 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 466944 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80265216 unmapped: 466944 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 458752 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 458752 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80273408 unmapped: 458752 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 450560 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 450560 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 442368 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 442368 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 442368 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 434176 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80297984 unmapped: 434176 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 425984 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 425984 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80306176 unmapped: 425984 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 417792 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80314368 unmapped: 417792 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 409600 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 409600 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 401408 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 401408 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 401408 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 393216 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 393216 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 385024 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 385024 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 376832 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 376832 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 376832 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 368640 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 368640 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 360448 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 360448 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 360448 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 352256 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 352256 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 344064 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 344064 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 335872 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 335872 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 335872 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 327680 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 601.0 total, 600.0 interval
Cumulative writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6918 writes, 28K keys, 6918 commit groups, 1.0 writes per commit group, ingest: 19.58 MB, 0.03 MB/s
Interval WAL: 6918 writes, 1283 syncs, 5.39 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 245760 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 237568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 237568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 237568 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 221184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 221184 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 212992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 212992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 212992 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 204800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 204800 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 196608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 196608 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 188416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 188416 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 180224 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 172032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 172032 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 163840 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 163840 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 155648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80576512 unmapped: 155648 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 147456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 147456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 147456 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 139264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 139264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80592896 unmapped: 139264 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 131072 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 131072 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 122880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80609280 unmapped: 122880 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 114688 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 114688 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 106496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 106496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 106496 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 98304 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 98304 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 90112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 90112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 90112 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 81920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 81920 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 73728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 73728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80658432 unmapped: 73728 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 65536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80666624 unmapped: 65536 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 57344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 57344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80674816 unmapped: 57344 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 40960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 49152 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 40960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 40960 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 32768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 32768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 32768 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 24576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 24576 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 16384 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 8192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 8192 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 0 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 0 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1040384 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 1032192 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 1024000 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80764928 unmapped: 1015808 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.083770752s of 299.087066650s, submitted: 2
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 1007616 heap: 81780736 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 974848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 974848 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 966656 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 958464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 958464 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 950272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 950272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 950272 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 942080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 942080 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 933888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 933888 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 925696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 925696 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 917504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 917504 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 909312 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 901120 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 901120 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 843776 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 835584 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590067fb800 session 0x559004f09340
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590071f1800 session 0x5590071bafc0
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.065643311s of 300.198425293s, submitted: 90
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000037s
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1201.0 total, 600.0 interval
Cumulative writes: 7142 writes, 28K keys, 7142 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7142 writes, 1395 syncs, 5.12 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.836059570s of 299.876525879s, submitted: 22
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.996520996s of 117.133117676s, submitted: 90
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 753664 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce52000/0x0/0x4ffc00000, data 0x11abd4/0x1d8000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 745472 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 17481728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 120 ms_handle_reset con 0x559008cfc000 session 0x559009955340
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 90808320 unmapped: 8806400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136303 data_alloc: 218103808 data_used: 5976
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 17096704 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 121 ms_handle_reset con 0x559008d7f400 session 0x559008e3d880
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x191e3ca/0x19e1000, compress 0x0/0x0/0x0, omap 0x106c6, meta 0x2bbf93a), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141665 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.466564178s of 22.644350052s, submitted: 41
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 10
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143287 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144979 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x11671, meta 0x2bbe98f), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x116d5, meta 0x2bbe92b), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.574191093s of 10.002868652s, submitted: 9
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 11
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149895 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb641000/0x0/0x4ffc00000, data 0x1921c53/0x19ea000, compress 0x0/0x0/0x0, omap 0x11be1, meta 0x2bbe41f), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [122,123], i have 123, src has [1,123]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63b000/0x0/0x4ffc00000, data 0x19239f3/0x19ef000, compress 0x0/0x0/0x0, omap 0x1244f, meta 0x2bbdbb1), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153339 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63d000/0x0/0x4ffc00000, data 0x1923a58/0x19ef000, compress 0x0/0x0/0x0, omap 0x125d3, meta 0x2bbda2d), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.307727814s of 10.003003120s, submitted: 55
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157951 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19255a1/0x19f2000, compress 0x0/0x0/0x0, omap 0x13360, meta 0x2bbcca0), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156927 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x192566b/0x19f2000, compress 0x0/0x0/0x0, omap 0x13585, meta 0x2bbca7b), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 14950400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13a6f, meta 0x2bbc591), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158173 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.946354866s of 13.003514290s, submitted: 32
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13b4d, meta 0x2bbc4b3), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157455 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13d95, meta 0x2bbc26b), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x192589a/0x19f3000, compress 0x0/0x0/0x0, omap 0x13f02, meta 0x2bbc0fe), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158813 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.955944061s of 10.002535820s, submitted: 20
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb63c000/0x0/0x4ffc00000, data 0x192585d/0x19f0000, compress 0x0/0x0/0x0, omap 0x14225, meta 0x2bbbddb), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163967 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19275c7/0x19f4000, compress 0x0/0x0/0x0, omap 0x149f5, meta 0x2bbb60b), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x19275f6/0x19f3000, compress 0x0/0x0/0x0, omap 0x14b15, meta 0x2bbb4eb), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165991 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb634000/0x0/0x4ffc00000, data 0x19290da/0x19f6000, compress 0x0/0x0/0x0, omap 0x14d43, meta 0x2bbb2bd), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.678595543s of 10.002803802s, submitted: 82
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x1422b, meta 0x2bbbdd5), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166245 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x14347, meta 0x2bbbcb9), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165655 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x1457f, meta 0x2bbba81), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.570578575s of 13.004203796s, submitted: 16
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165495 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x14627, meta 0x2bbb9d9), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 14770176 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19293e5/0x19f7000, compress 0x0/0x0/0x0, omap 0x14747, meta 0x2bbb8b9), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 12
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 14696448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170315 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 14688256 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19296ba/0x19f7000, compress 0x0/0x0/0x0, omap 0x14867, meta 0x2bbb799), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887221336s of 10.005003929s, submitted: 46
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173441 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 14622720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fb62d000/0x0/0x4ffc00000, data 0x192d066/0x19fd000, compress 0x0/0x0/0x0, omap 0x14fe9, meta 0x2bbb017), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183767 data_alloc: 218103808 data_used: 6561
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 14516224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fb624000/0x0/0x4ffc00000, data 0x1930971/0x1a02000, compress 0x0/0x0/0x0, omap 0x15745, meta 0x2bba8bb), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 12361728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 12345344 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb61f000/0x0/0x4ffc00000, data 0x19360be/0x1a0b000, compress 0x0/0x0/0x0, omap 0x1637b, meta 0x2bb9c85), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.471419334s of 10.002448082s, submitted: 188
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194735 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 12312576 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x1939a3d/0x1a12000, compress 0x0/0x0/0x0, omap 0x16e2c, meta 0x2bb91d4), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199515 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.264224052s of 10.198055267s, submitted: 77
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198365 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b56b/0x1a14000, compress 0x0/0x0/0x0, omap 0x1841a, meta 0x2bb7be6), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199897 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.554255486s of 10.002036095s, submitted: 6
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199753 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b66b/0x1a15000, compress 0x0/0x0/0x0, omap 0x189ae, meta 0x2bb7652), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.060717583s of 14.003334045s, submitted: 8
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ace7, meta 0x2bb5319), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.796292305s of 10.001618385s, submitted: 11
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 ms_handle_reset con 0x559008d81400 session 0x55900771efc0
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 13
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10780672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.715125084s of 10.001788139s, submitted: 197
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b864/0x1a15000, compress 0x0/0x0/0x0, omap 0x1b9f0, meta 0x2bb4610), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200871 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 10747904 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b993/0x1a15000, compress 0x0/0x0/0x0, omap 0x1c126, meta 0x2bb3eda), peers [0,2] op hist [])
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 10731520 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200281 data_alloc: 218103808 data_used: 7211
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:53:01 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 05:56:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:56:54.926 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:56:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:56:54.926 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:56:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:56:54.926 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:56:54 np0005545273 podman[270435]: 2025-12-04 10:56:54.944939306 +0000 UTC m=+0.050561729 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  4 05:56:55 np0005545273 rsyslogd[1007]: imjournal: 15308 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  4 05:56:55 np0005545273 nova_compute[244644]: 2025-12-04 10:56:55.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:56:55 np0005545273 nova_compute[244644]: 2025-12-04 10:56:55.400 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:56:55 np0005545273 nova_compute[244644]: 2025-12-04 10:56:55.401 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:56:55 np0005545273 nova_compute[244644]: 2025-12-04 10:56:55.401 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:56:55 np0005545273 nova_compute[244644]: 2025-12-04 10:56:55.401 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:56:55 np0005545273 nova_compute[244644]: 2025-12-04 10:56:55.402 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:56:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:56:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774168624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:56:55 np0005545273 nova_compute[244644]: 2025-12-04 10:56:55.977 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:56:56 np0005545273 nova_compute[244644]: 2025-12-04 10:56:56.146 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:56:56 np0005545273 nova_compute[244644]: 2025-12-04 10:56:56.147 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4951MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:56:56 np0005545273 nova_compute[244644]: 2025-12-04 10:56:56.148 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:56:56 np0005545273 nova_compute[244644]: 2025-12-04 10:56:56.148 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:56:56 np0005545273 nova_compute[244644]: 2025-12-04 10:56:56.308 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:56:56 np0005545273 nova_compute[244644]: 2025-12-04 10:56:56.309 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:56:56 np0005545273 nova_compute[244644]: 2025-12-04 10:56:56.370 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:56:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec  4 05:56:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:56:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794570103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:56:57 np0005545273 nova_compute[244644]: 2025-12-04 10:56:57.204 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.834s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:56:57 np0005545273 nova_compute[244644]: 2025-12-04 10:56:57.210 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:56:57 np0005545273 nova_compute[244644]: 2025-12-04 10:56:57.316 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:56:57 np0005545273 nova_compute[244644]: 2025-12-04 10:56:57.318 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:56:57 np0005545273 nova_compute[244644]: 2025-12-04 10:56:57.318 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:56:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:56:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:56:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Dec  4 05:56:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:56:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:56:58 np0005545273 nova_compute[244644]: 2025-12-04 10:56:58.736 244650 DEBUG oslo_concurrency.processutils [None req-d756daf3-793b-420f-81e0-210aaa24d49d a4b8cd4cc3ed49f488aae8af8459583a 340d3ca308e046158ba89c94dd84cdec - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:56:58 np0005545273 nova_compute[244644]: 2025-12-04 10:56:58.758 244650 DEBUG oslo_concurrency.processutils [None req-d756daf3-793b-420f-81e0-210aaa24d49d a4b8cd4cc3ed49f488aae8af8459583a 340d3ca308e046158ba89c94dd84cdec - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:56:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.217720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819217811, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1356, "num_deletes": 251, "total_data_size": 2129824, "memory_usage": 2175016, "flush_reason": "Manual Compaction"}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819230361, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1233771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31100, "largest_seqno": 32455, "table_properties": {"data_size": 1229024, "index_size": 2143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12499, "raw_average_key_size": 20, "raw_value_size": 1218609, "raw_average_value_size": 2014, "num_data_blocks": 98, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845676, "oldest_key_time": 1764845676, "file_creation_time": 1764845819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 12674 microseconds, and 4283 cpu microseconds.
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.230403) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1233771 bytes OK
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.230440) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.234796) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.234835) EVENT_LOG_v1 {"time_micros": 1764845819234828, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.234863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2123801, prev total WAL file size 2154749, number of live WAL files 2.
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.235781) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1204KB)], [65(10MB)]
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819235827, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11818629, "oldest_snapshot_seqno": -1}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6324 keys, 9328499 bytes, temperature: kUnknown
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819745000, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 9328499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9287309, "index_size": 24248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 158763, "raw_average_key_size": 25, "raw_value_size": 9175140, "raw_average_value_size": 1450, "num_data_blocks": 993, "num_entries": 6324, "num_filter_entries": 6324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:56:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:56:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.745321) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9328499 bytes
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.836917) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 23.2 rd, 18.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(17.1) write-amplify(7.6) OK, records in: 6773, records dropped: 449 output_compression: NoCompression
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.836983) EVENT_LOG_v1 {"time_micros": 1764845819836957, "job": 36, "event": "compaction_finished", "compaction_time_micros": 509251, "compaction_time_cpu_micros": 24849, "output_level": 6, "num_output_files": 1, "total_output_size": 9328499, "num_input_records": 6773, "num_output_records": 6324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819837766, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845819840233, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.235697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:56:59 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:56:59.840299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:57:00 np0005545273 nova_compute[244644]: 2025-12-04 10:57:00.319 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:00 np0005545273 nova_compute[244644]: 2025-12-04 10:57:00.319 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:00 np0005545273 nova_compute[244644]: 2025-12-04 10:57:00.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:00 np0005545273 nova_compute[244644]: 2025-12-04 10:57:00.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:57:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Dec  4 05:57:02 np0005545273 nova_compute[244644]: 2025-12-04 10:57:02.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:02 np0005545273 nova_compute[244644]: 2025-12-04 10:57:02.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec  4 05:57:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:57:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:57:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:57:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:57:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:57:03 np0005545273 podman[270645]: 2025-12-04 10:57:03.527042207 +0000 UTC m=+0.045587877 container create 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 05:57:03 np0005545273 podman[270645]: 2025-12-04 10:57:03.506018363 +0000 UTC m=+0.024563853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:57:03 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:57:03 np0005545273 systemd[1]: Started libpod-conmon-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope.
Dec  4 05:57:03 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:57:03 np0005545273 podman[270645]: 2025-12-04 10:57:03.746773228 +0000 UTC m=+0.265318728 container init 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:57:03 np0005545273 podman[270645]: 2025-12-04 10:57:03.756898605 +0000 UTC m=+0.275444085 container start 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 05:57:03 np0005545273 podman[270645]: 2025-12-04 10:57:03.761255962 +0000 UTC m=+0.279801462 container attach 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:57:03 np0005545273 exciting_poincare[270661]: 167 167
Dec  4 05:57:03 np0005545273 systemd[1]: libpod-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope: Deactivated successfully.
Dec  4 05:57:03 np0005545273 conmon[270661]: conmon 881de0ea28bc0d9c2d70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope/container/memory.events
Dec  4 05:57:03 np0005545273 podman[270645]: 2025-12-04 10:57:03.766662224 +0000 UTC m=+0.285207724 container died 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:57:03 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bd946ef0108f97312de5920a46ce19684fbd64858e78635c2a4dbb644151753f-merged.mount: Deactivated successfully.
Dec  4 05:57:03 np0005545273 podman[270645]: 2025-12-04 10:57:03.809416131 +0000 UTC m=+0.327961611 container remove 881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:57:03 np0005545273 systemd[1]: libpod-conmon-881de0ea28bc0d9c2d70603061ffaf690d25addebda1e35d8dba08a6f25c89c3.scope: Deactivated successfully.
Dec  4 05:57:03 np0005545273 podman[270685]: 2025-12-04 10:57:03.96944315 +0000 UTC m=+0.045868615 container create cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:57:04 np0005545273 systemd[1]: Started libpod-conmon-cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80.scope.
Dec  4 05:57:04 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:57:04 np0005545273 podman[270685]: 2025-12-04 10:57:03.947771409 +0000 UTC m=+0.024196894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:57:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:04 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:04 np0005545273 podman[270685]: 2025-12-04 10:57:04.336512097 +0000 UTC m=+0.412937572 container init cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:57:04 np0005545273 podman[270685]: 2025-12-04 10:57:04.350982861 +0000 UTC m=+0.427408366 container start cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec  4 05:57:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:57:04 np0005545273 podman[270685]: 2025-12-04 10:57:04.544222193 +0000 UTC m=+0.620647688 container attach cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:57:04 np0005545273 eloquent_borg[270702]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:57:04 np0005545273 eloquent_borg[270702]: --> All data devices are unavailable
Dec  4 05:57:04 np0005545273 systemd[1]: libpod-cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80.scope: Deactivated successfully.
Dec  4 05:57:04 np0005545273 podman[270685]: 2025-12-04 10:57:04.86470189 +0000 UTC m=+0.941127375 container died cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:57:05 np0005545273 systemd[1]: var-lib-containers-storage-overlay-6f21799bbbfa76e61045cfeebbd03ea9638240c37e7cabd93fdc068b952d1b5f-merged.mount: Deactivated successfully.
Dec  4 05:57:05 np0005545273 podman[270685]: 2025-12-04 10:57:05.511382444 +0000 UTC m=+1.587807909 container remove cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:57:05 np0005545273 systemd[1]: libpod-conmon-cc16d5dd35c48238f16bd4f8b309ebfb27130dc303a35d82eb2379b07d780e80.scope: Deactivated successfully.
Dec  4 05:57:05 np0005545273 podman[270736]: 2025-12-04 10:57:05.587010286 +0000 UTC m=+0.065301570 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  4 05:57:05 np0005545273 podman[270735]: 2025-12-04 10:57:05.625500128 +0000 UTC m=+0.103726811 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:57:05 np0005545273 podman[270842]: 2025-12-04 10:57:05.982982592 +0000 UTC m=+0.046103070 container create 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 05:57:06 np0005545273 systemd[1]: Started libpod-conmon-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope.
Dec  4 05:57:06 np0005545273 podman[270842]: 2025-12-04 10:57:05.963132746 +0000 UTC m=+0.026253244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:57:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:57:06 np0005545273 podman[270842]: 2025-12-04 10:57:06.07647665 +0000 UTC m=+0.139597158 container init 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:57:06 np0005545273 podman[270842]: 2025-12-04 10:57:06.08497787 +0000 UTC m=+0.148098328 container start 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:57:06 np0005545273 podman[270842]: 2025-12-04 10:57:06.088565987 +0000 UTC m=+0.151686475 container attach 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  4 05:57:06 np0005545273 systemd[1]: libpod-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope: Deactivated successfully.
Dec  4 05:57:06 np0005545273 mystifying_khorana[270859]: 167 167
Dec  4 05:57:06 np0005545273 conmon[270859]: conmon 28a2a36d26c42618e308 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope/container/memory.events
Dec  4 05:57:06 np0005545273 podman[270842]: 2025-12-04 10:57:06.093644222 +0000 UTC m=+0.156764680 container died 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 05:57:06 np0005545273 systemd[1]: var-lib-containers-storage-overlay-9deb05296c63ddd12e00a8c25ae8b4bfcaad21f18fe6d5812a40c7cbbaae95b3-merged.mount: Deactivated successfully.
Dec  4 05:57:06 np0005545273 podman[270842]: 2025-12-04 10:57:06.131504619 +0000 UTC m=+0.194625077 container remove 28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  4 05:57:06 np0005545273 systemd[1]: libpod-conmon-28a2a36d26c42618e308dcdf5838b391c5ceba696d48431f23910b32a41a5c06.scope: Deactivated successfully.
Dec  4 05:57:06 np0005545273 podman[270883]: 2025-12-04 10:57:06.304955076 +0000 UTC m=+0.047467604 container create c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Dec  4 05:57:06 np0005545273 systemd[1]: Started libpod-conmon-c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981.scope.
Dec  4 05:57:06 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:57:06 np0005545273 podman[270883]: 2025-12-04 10:57:06.285010917 +0000 UTC m=+0.027523455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:57:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  4 05:57:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:06 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:06 np0005545273 podman[270883]: 2025-12-04 10:57:06.404237237 +0000 UTC m=+0.146749775 container init c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 05:57:06 np0005545273 podman[270883]: 2025-12-04 10:57:06.414270372 +0000 UTC m=+0.156782890 container start c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:57:06 np0005545273 podman[270883]: 2025-12-04 10:57:06.417778118 +0000 UTC m=+0.160290666 container attach c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]: {
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:    "0": [
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:        {
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "devices": [
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "/dev/loop3"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            ],
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_name": "ceph_lv0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_size": "21470642176",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "name": "ceph_lv0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "tags": {
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cluster_name": "ceph",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.crush_device_class": "",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.encrypted": "0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.objectstore": "bluestore",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osd_id": "0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.type": "block",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.vdo": "0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.with_tpm": "0"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            },
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "type": "block",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "vg_name": "ceph_vg0"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:        }
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:    ],
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:    "1": [
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:        {
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "devices": [
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "/dev/loop4"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            ],
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_name": "ceph_lv1",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_size": "21470642176",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "name": "ceph_lv1",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "tags": {
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cluster_name": "ceph",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.crush_device_class": "",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.encrypted": "0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.objectstore": "bluestore",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osd_id": "1",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.type": "block",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.vdo": "0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.with_tpm": "0"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            },
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "type": "block",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "vg_name": "ceph_vg1"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:        }
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:    ],
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:    "2": [
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:        {
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "devices": [
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "/dev/loop5"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            ],
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_name": "ceph_lv2",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_size": "21470642176",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "name": "ceph_lv2",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "tags": {
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.cluster_name": "ceph",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.crush_device_class": "",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.encrypted": "0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.objectstore": "bluestore",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osd_id": "2",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.type": "block",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.vdo": "0",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:                "ceph.with_tpm": "0"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            },
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "type": "block",
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:            "vg_name": "ceph_vg2"
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:        }
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]:    ]
Dec  4 05:57:06 np0005545273 blissful_hodgkin[270900]: }
Dec  4 05:57:06 np0005545273 systemd[1]: libpod-c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981.scope: Deactivated successfully.
Dec  4 05:57:06 np0005545273 podman[270883]: 2025-12-04 10:57:06.73028141 +0000 UTC m=+0.472793928 container died c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec  4 05:57:06 np0005545273 systemd[1]: var-lib-containers-storage-overlay-20da25609c72a2455b29e5f2e2a1ceeed61f0963a33d56adc02d6d22e62b7a74-merged.mount: Deactivated successfully.
Dec  4 05:57:06 np0005545273 podman[270883]: 2025-12-04 10:57:06.779431624 +0000 UTC m=+0.521944142 container remove c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  4 05:57:06 np0005545273 systemd[1]: libpod-conmon-c8a9f413c1901e3b8b0083972d9e57051c4de4d3a4bf02e3d67c754eecdb2981.scope: Deactivated successfully.
Dec  4 05:57:07 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:57:07.105 156095 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'aa:78:67', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:d2:c7:24:ee:78'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  4 05:57:07 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:57:07.108 156095 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  4 05:57:07 np0005545273 podman[270982]: 2025-12-04 10:57:07.285905274 +0000 UTC m=+0.050331413 container create 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:57:07 np0005545273 systemd[1]: Started libpod-conmon-24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb.scope.
Dec  4 05:57:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:57:07 np0005545273 podman[270982]: 2025-12-04 10:57:07.264286326 +0000 UTC m=+0.028712485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:57:07 np0005545273 podman[270982]: 2025-12-04 10:57:07.373817487 +0000 UTC m=+0.138243626 container init 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:57:07 np0005545273 podman[270982]: 2025-12-04 10:57:07.379501066 +0000 UTC m=+0.143927205 container start 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  4 05:57:07 np0005545273 podman[270982]: 2025-12-04 10:57:07.383492324 +0000 UTC m=+0.147918453 container attach 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  4 05:57:07 np0005545273 dreamy_hypatia[270999]: 167 167
Dec  4 05:57:07 np0005545273 systemd[1]: libpod-24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb.scope: Deactivated successfully.
Dec  4 05:57:07 np0005545273 podman[270982]: 2025-12-04 10:57:07.385594665 +0000 UTC m=+0.150020794 container died 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:57:07 np0005545273 systemd[1]: var-lib-containers-storage-overlay-cab505017658995a384ee88069a1bcda74d0154f017298532d9e319799cdb711-merged.mount: Deactivated successfully.
Dec  4 05:57:07 np0005545273 podman[270982]: 2025-12-04 10:57:07.426979259 +0000 UTC m=+0.191405378 container remove 24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:57:07 np0005545273 systemd[1]: libpod-conmon-24902f9a24f8184ceafb66b7d8e979045cc813605575efc6d7b98da26a3354eb.scope: Deactivated successfully.
Dec  4 05:57:07 np0005545273 podman[271023]: 2025-12-04 10:57:07.604059225 +0000 UTC m=+0.058044312 container create 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:57:07 np0005545273 systemd[1]: Started libpod-conmon-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope.
Dec  4 05:57:07 np0005545273 podman[271023]: 2025-12-04 10:57:07.571394565 +0000 UTC m=+0.025379732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:57:07 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:57:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:07 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:57:07 np0005545273 podman[271023]: 2025-12-04 10:57:07.71698439 +0000 UTC m=+0.170969487 container init 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 05:57:07 np0005545273 podman[271023]: 2025-12-04 10:57:07.730473761 +0000 UTC m=+0.184458838 container start 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:57:07 np0005545273 podman[271023]: 2025-12-04 10:57:07.733889984 +0000 UTC m=+0.187875061 container attach 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec  4 05:57:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec  4 05:57:08 np0005545273 lvm[271118]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:57:08 np0005545273 lvm[271119]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:57:08 np0005545273 lvm[271119]: VG ceph_vg1 finished
Dec  4 05:57:08 np0005545273 lvm[271118]: VG ceph_vg0 finished
Dec  4 05:57:08 np0005545273 lvm[271121]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:57:08 np0005545273 lvm[271121]: VG ceph_vg2 finished
Dec  4 05:57:08 np0005545273 hopeful_saha[271040]: {}
Dec  4 05:57:08 np0005545273 systemd[1]: libpod-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope: Deactivated successfully.
Dec  4 05:57:08 np0005545273 systemd[1]: libpod-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope: Consumed 1.529s CPU time.
Dec  4 05:57:08 np0005545273 podman[271023]: 2025-12-04 10:57:08.67916978 +0000 UTC m=+1.133154867 container died 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:57:08 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0999ac1a4e8877eb4de63dd5aa368c013dfbe454f15997e146022def777f7f4c-merged.mount: Deactivated successfully.
Dec  4 05:57:08 np0005545273 podman[271023]: 2025-12-04 10:57:08.731985183 +0000 UTC m=+1.185970260 container remove 466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:57:08 np0005545273 systemd[1]: libpod-conmon-466dd57ffba11c472e1acbb2aa0d8bab0bad52de664d031adf58fa933ad7c290.scope: Deactivated successfully.
Dec  4 05:57:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:57:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:57:08 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:57:08 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:57:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:10 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:57:10.109 156095 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=565580d5-3422-4e11-b563-3f1a3db67238, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  4 05:57:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:57:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:57:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec  4 05:57:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:57:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1964643132' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:57:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:57:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1964643132' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:57:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Dec  4 05:57:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Dec  4 05:57:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:25 np0005545273 podman[271162]: 2025-12-04 10:57:25.896838767 +0000 UTC m=+0.093929871 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:57:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:57:26
Dec  4 05:57:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:57:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:57:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log']
Dec  4 05:57:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:57:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:57:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:57:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:57:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:35 np0005545273 podman[271185]: 2025-12-04 10:57:35.958445233 +0000 UTC m=+0.055407358 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:57:35 np0005545273 podman[271184]: 2025-12-04 10:57:35.991920101 +0000 UTC m=+0.091032970 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  4 05:57:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:57:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:57:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:54 np0005545273 nova_compute[244644]: 2025-12-04 10:57:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:54 np0005545273 nova_compute[244644]: 2025-12-04 10:57:54.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:57:54.928 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:57:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:57:54.929 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:57:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:57:54.929 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:57:55 np0005545273 nova_compute[244644]: 2025-12-04 10:57:55.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:55 np0005545273 nova_compute[244644]: 2025-12-04 10:57:55.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:57:55 np0005545273 nova_compute[244644]: 2025-12-04 10:57:55.340 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:57:55 np0005545273 nova_compute[244644]: 2025-12-04 10:57:55.356 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:57:56 np0005545273 nova_compute[244644]: 2025-12-04 10:57:56.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:56 np0005545273 nova_compute[244644]: 2025-12-04 10:57:56.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:57:56 np0005545273 nova_compute[244644]: 2025-12-04 10:57:56.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:57:56 np0005545273 nova_compute[244644]: 2025-12-04 10:57:56.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:57:56 np0005545273 nova_compute[244644]: 2025-12-04 10:57:56.371 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:57:56 np0005545273 nova_compute[244644]: 2025-12-04 10:57:56.371 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:57:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:57:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095246848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:57:56 np0005545273 nova_compute[244644]: 2025-12-04 10:57:56.951 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:57:56 np0005545273 podman[271249]: 2025-12-04 10:57:56.951960145 +0000 UTC m=+0.058281218 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.114 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.116 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.116 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.117 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.194 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.195 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.211 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:57:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:57:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132208595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.784 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.790 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.807 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.809 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:57:57 np0005545273 nova_compute[244644]: 2025-12-04 10:57:57.809 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:57:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:57:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:57:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:57:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:57:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:57:58 np0005545273 nova_compute[244644]: 2025-12-04 10:57:58.805 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:57:59 np0005545273 nova_compute[244644]: 2025-12-04 10:57:59.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:57:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:57:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:58:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:01 np0005545273 nova_compute[244644]: 2025-12-04 10:58:01.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:01 np0005545273 nova_compute[244644]: 2025-12-04 10:58:01.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:01 np0005545273 nova_compute[244644]: 2025-12-04 10:58:01.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:58:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:03 np0005545273 nova_compute[244644]: 2025-12-04 10:58:03.334 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:03 np0005545273 nova_compute[244644]: 2025-12-04 10:58:03.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:06 np0005545273 podman[271294]: 2025-12-04 10:58:06.944420758 +0000 UTC m=+0.042324098 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 05:58:06 np0005545273 podman[271293]: 2025-12-04 10:58:06.970942807 +0000 UTC m=+0.074695580 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  4 05:58:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:58:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:58:09 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:09 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:10 np0005545273 podman[271548]: 2025-12-04 10:58:10.497130829 +0000 UTC m=+0.027241439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:10 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:58:10 np0005545273 podman[271548]: 2025-12-04 10:58:10.602723254 +0000 UTC m=+0.132833844 container create 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  4 05:58:10 np0005545273 systemd[1]: Started libpod-conmon-9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f.scope.
Dec  4 05:58:10 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:58:10 np0005545273 podman[271548]: 2025-12-04 10:58:10.814208442 +0000 UTC m=+0.344319052 container init 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:58:10 np0005545273 podman[271548]: 2025-12-04 10:58:10.822304351 +0000 UTC m=+0.352414941 container start 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:58:10 np0005545273 podman[271548]: 2025-12-04 10:58:10.827501038 +0000 UTC m=+0.357611638 container attach 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  4 05:58:10 np0005545273 friendly_cohen[271564]: 167 167
Dec  4 05:58:10 np0005545273 systemd[1]: libpod-9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f.scope: Deactivated successfully.
Dec  4 05:58:10 np0005545273 podman[271548]: 2025-12-04 10:58:10.82964923 +0000 UTC m=+0.359759820 container died 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:58:10 np0005545273 systemd[1]: var-lib-containers-storage-overlay-d17b1939c06d86a71129cbab6acb03b13380ce2a0367bbe16d4bc17c34f312e6-merged.mount: Deactivated successfully.
Dec  4 05:58:10 np0005545273 podman[271548]: 2025-12-04 10:58:10.872046478 +0000 UTC m=+0.402157068 container remove 9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:58:10 np0005545273 systemd[1]: libpod-conmon-9d2de7a39ff5abb79a05a930e75039045dbeccdee6bcef1d43e678d6bd11c62f.scope: Deactivated successfully.
Dec  4 05:58:11 np0005545273 podman[271589]: 2025-12-04 10:58:11.026034119 +0000 UTC m=+0.034416233 container create 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:58:11 np0005545273 systemd[1]: Started libpod-conmon-103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a.scope.
Dec  4 05:58:11 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:58:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:11 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:11 np0005545273 podman[271589]: 2025-12-04 10:58:11.010022507 +0000 UTC m=+0.018404641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:58:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:58:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160988379' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:58:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:58:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160988379' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:58:11 np0005545273 podman[271589]: 2025-12-04 10:58:11.768777716 +0000 UTC m=+0.777159860 container init 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:58:11 np0005545273 podman[271589]: 2025-12-04 10:58:11.777060769 +0000 UTC m=+0.785442883 container start 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:58:12 np0005545273 podman[271589]: 2025-12-04 10:58:12.100042867 +0000 UTC m=+1.108425121 container attach 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 05:58:12 np0005545273 wonderful_haslett[271605]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:58:12 np0005545273 wonderful_haslett[271605]: --> All data devices are unavailable
Dec  4 05:58:12 np0005545273 systemd[1]: libpod-103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a.scope: Deactivated successfully.
Dec  4 05:58:12 np0005545273 podman[271589]: 2025-12-04 10:58:12.291453614 +0000 UTC m=+1.299835728 container died 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:58:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:12 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4399928b69a29cd636ce5303f115eb6e8195cf26311d32ca0113370805936031-merged.mount: Deactivated successfully.
Dec  4 05:58:12 np0005545273 podman[271589]: 2025-12-04 10:58:12.55147222 +0000 UTC m=+1.559854334 container remove 103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_haslett, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:58:12 np0005545273 systemd[1]: libpod-conmon-103b8621238a69dfa3643517ab129bd335860b51be1b7935ada41754938c641a.scope: Deactivated successfully.
Dec  4 05:58:13 np0005545273 podman[271701]: 2025-12-04 10:58:13.084397419 +0000 UTC m=+0.095837427 container create 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:58:13 np0005545273 podman[271701]: 2025-12-04 10:58:13.010171461 +0000 UTC m=+0.021611479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:58:13 np0005545273 systemd[1]: Started libpod-conmon-5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d.scope.
Dec  4 05:58:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:58:13 np0005545273 podman[271701]: 2025-12-04 10:58:13.168685074 +0000 UTC m=+0.180125092 container init 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 05:58:13 np0005545273 podman[271701]: 2025-12-04 10:58:13.175656594 +0000 UTC m=+0.187096592 container start 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:58:13 np0005545273 vibrant_williams[271717]: 167 167
Dec  4 05:58:13 np0005545273 podman[271701]: 2025-12-04 10:58:13.179867978 +0000 UTC m=+0.191307996 container attach 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:58:13 np0005545273 systemd[1]: libpod-5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d.scope: Deactivated successfully.
Dec  4 05:58:13 np0005545273 podman[271701]: 2025-12-04 10:58:13.181114338 +0000 UTC m=+0.192554366 container died 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:58:13 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4d0363223f673a5fa286483e893a80b11bb9f0637a2e5feabad742a9cd41e8c7-merged.mount: Deactivated successfully.
Dec  4 05:58:13 np0005545273 podman[271701]: 2025-12-04 10:58:13.223315861 +0000 UTC m=+0.234755859 container remove 5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_williams, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  4 05:58:13 np0005545273 systemd[1]: libpod-conmon-5f3853bae649e3f364a13d97d72e55459fe427854ddf0f04dfc37d4a69e98f4d.scope: Deactivated successfully.
Dec  4 05:58:13 np0005545273 podman[271740]: 2025-12-04 10:58:13.376794859 +0000 UTC m=+0.041890757 container create 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:58:13 np0005545273 podman[271740]: 2025-12-04 10:58:13.358559942 +0000 UTC m=+0.023655870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:58:13 np0005545273 systemd[1]: Started libpod-conmon-0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c.scope.
Dec  4 05:58:13 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:58:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:13 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:13 np0005545273 podman[271740]: 2025-12-04 10:58:13.528953545 +0000 UTC m=+0.194049463 container init 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:58:13 np0005545273 podman[271740]: 2025-12-04 10:58:13.537938314 +0000 UTC m=+0.203034212 container start 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:58:13 np0005545273 podman[271740]: 2025-12-04 10:58:13.646512793 +0000 UTC m=+0.311608691 container attach 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]: {
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:    "0": [
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:        {
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "devices": [
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "/dev/loop3"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            ],
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_name": "ceph_lv0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_size": "21470642176",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "name": "ceph_lv0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "tags": {
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cluster_name": "ceph",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.crush_device_class": "",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.encrypted": "0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.objectstore": "bluestore",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osd_id": "0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.type": "block",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.vdo": "0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.with_tpm": "0"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            },
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "type": "block",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "vg_name": "ceph_vg0"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:        }
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:    ],
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:    "1": [
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:        {
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "devices": [
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "/dev/loop4"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            ],
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_name": "ceph_lv1",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_size": "21470642176",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "name": "ceph_lv1",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "tags": {
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cluster_name": "ceph",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.crush_device_class": "",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.encrypted": "0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.objectstore": "bluestore",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osd_id": "1",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.type": "block",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.vdo": "0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.with_tpm": "0"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            },
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "type": "block",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "vg_name": "ceph_vg1"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:        }
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:    ],
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:    "2": [
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:        {
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "devices": [
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "/dev/loop5"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            ],
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_name": "ceph_lv2",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_size": "21470642176",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "name": "ceph_lv2",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "tags": {
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.cluster_name": "ceph",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.crush_device_class": "",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.encrypted": "0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.objectstore": "bluestore",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osd_id": "2",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.type": "block",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.vdo": "0",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:                "ceph.with_tpm": "0"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            },
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "type": "block",
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:            "vg_name": "ceph_vg2"
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:        }
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]:    ]
Dec  4 05:58:13 np0005545273 youthful_haslett[271756]: }
Dec  4 05:58:13 np0005545273 systemd[1]: libpod-0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c.scope: Deactivated successfully.
Dec  4 05:58:13 np0005545273 podman[271740]: 2025-12-04 10:58:13.85464504 +0000 UTC m=+0.519740948 container died 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec  4 05:58:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-242c19e6a046f0a9fd3e2f55cfa6b9e7b97fde044863be26c932f7002dc9e956-merged.mount: Deactivated successfully.
Dec  4 05:58:14 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 05:58:14 np0005545273 podman[271740]: 2025-12-04 10:58:14.191164609 +0000 UTC m=+0.856260507 container remove 0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  4 05:58:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:14 np0005545273 systemd[1]: libpod-conmon-0809a110cbafec18797bd6dfe9c596fdfc7199b9d5fb970cb10f56e5af30f20c.scope: Deactivated successfully.
Dec  4 05:58:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:14 np0005545273 podman[271840]: 2025-12-04 10:58:14.611141083 +0000 UTC m=+0.022526303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:58:14 np0005545273 podman[271840]: 2025-12-04 10:58:14.73109106 +0000 UTC m=+0.142476260 container create bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  4 05:58:14 np0005545273 systemd[1]: Started libpod-conmon-bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b.scope.
Dec  4 05:58:14 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:58:14 np0005545273 podman[271840]: 2025-12-04 10:58:14.814518122 +0000 UTC m=+0.225903342 container init bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:58:14 np0005545273 podman[271840]: 2025-12-04 10:58:14.820175921 +0000 UTC m=+0.231561121 container start bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:58:14 np0005545273 infallible_jones[271856]: 167 167
Dec  4 05:58:14 np0005545273 systemd[1]: libpod-bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b.scope: Deactivated successfully.
Dec  4 05:58:14 np0005545273 podman[271840]: 2025-12-04 10:58:14.874405569 +0000 UTC m=+0.285790779 container attach bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:58:14 np0005545273 podman[271840]: 2025-12-04 10:58:14.875947056 +0000 UTC m=+0.287332266 container died bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:58:14 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bd9240817b6fa1798e21465a39bce941953aa56b49679d7df8b53ac0254c2ffa-merged.mount: Deactivated successfully.
Dec  4 05:58:15 np0005545273 podman[271840]: 2025-12-04 10:58:15.08479698 +0000 UTC m=+0.496182180 container remove bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:58:15 np0005545273 systemd[1]: libpod-conmon-bacbc6599f940bbfb45c885c00c1a3f1668d936e2f27fef11f4d3142edcb412b.scope: Deactivated successfully.
Dec  4 05:58:15 np0005545273 podman[271883]: 2025-12-04 10:58:15.245349362 +0000 UTC m=+0.050771524 container create f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:58:15 np0005545273 podman[271883]: 2025-12-04 10:58:15.217138491 +0000 UTC m=+0.022560633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:58:15 np0005545273 systemd[1]: Started libpod-conmon-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope.
Dec  4 05:58:15 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:58:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:15 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:58:15 np0005545273 podman[271883]: 2025-12-04 10:58:15.365548115 +0000 UTC m=+0.170970237 container init f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:58:15 np0005545273 podman[271883]: 2025-12-04 10:58:15.37229367 +0000 UTC m=+0.177715792 container start f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  4 05:58:15 np0005545273 podman[271883]: 2025-12-04 10:58:15.375310614 +0000 UTC m=+0.180732756 container attach f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 05:58:16 np0005545273 lvm[271978]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:58:16 np0005545273 lvm[271979]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:58:16 np0005545273 lvm[271978]: VG ceph_vg0 finished
Dec  4 05:58:16 np0005545273 lvm[271979]: VG ceph_vg1 finished
Dec  4 05:58:16 np0005545273 lvm[271981]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:58:16 np0005545273 lvm[271981]: VG ceph_vg2 finished
Dec  4 05:58:16 np0005545273 priceless_austin[271900]: {}
Dec  4 05:58:16 np0005545273 systemd[1]: libpod-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope: Deactivated successfully.
Dec  4 05:58:16 np0005545273 systemd[1]: libpod-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope: Consumed 1.363s CPU time.
Dec  4 05:58:16 np0005545273 podman[271984]: 2025-12-04 10:58:16.227824809 +0000 UTC m=+0.024011470 container died f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  4 05:58:16 np0005545273 systemd[1]: var-lib-containers-storage-overlay-4f264cd995f5370093cbe7e74b8a0b72a93e97d6d0fd94e027681e274d65a0dd-merged.mount: Deactivated successfully.
Dec  4 05:58:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:16 np0005545273 podman[271984]: 2025-12-04 10:58:16.462129416 +0000 UTC m=+0.258316077 container remove f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  4 05:58:16 np0005545273 systemd[1]: libpod-conmon-f9f958c49434dbbef27af9286ebfb7bca00f65663a92e579333bd0b0fc56a3d1.scope: Deactivated successfully.
Dec  4 05:58:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:58:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:16 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:58:16 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:58:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.245999) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904246062, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 949, "num_deletes": 251, "total_data_size": 1388432, "memory_usage": 1416080, "flush_reason": "Manual Compaction"}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec  4 05:58:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904467790, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1353575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32456, "largest_seqno": 33404, "table_properties": {"data_size": 1348795, "index_size": 2368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10318, "raw_average_key_size": 19, "raw_value_size": 1339271, "raw_average_value_size": 2555, "num_data_blocks": 106, "num_entries": 524, "num_filter_entries": 524, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845819, "oldest_key_time": 1764845819, "file_creation_time": 1764845904, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 221860 microseconds, and 4707 cpu microseconds.
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.467856) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1353575 bytes OK
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.467886) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.516395) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.516446) EVENT_LOG_v1 {"time_micros": 1764845904516435, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.516473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1383842, prev total WAL file size 1383842, number of live WAL files 2.
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.517220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1321KB)], [68(9109KB)]
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904517278, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10682074, "oldest_snapshot_seqno": -1}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6334 keys, 8783747 bytes, temperature: kUnknown
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904631583, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8783747, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8742958, "index_size": 23847, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 159654, "raw_average_key_size": 25, "raw_value_size": 8631010, "raw_average_value_size": 1362, "num_data_blocks": 970, "num_entries": 6334, "num_filter_entries": 6334, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845904, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.631809) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8783747 bytes
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.634073) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.4 rd, 76.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(14.4) write-amplify(6.5) OK, records in: 6848, records dropped: 514 output_compression: NoCompression
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.634088) EVENT_LOG_v1 {"time_micros": 1764845904634080, "job": 38, "event": "compaction_finished", "compaction_time_micros": 114368, "compaction_time_cpu_micros": 21709, "output_level": 6, "num_output_files": 1, "total_output_size": 8783747, "num_input_records": 6848, "num_output_records": 6334, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904634390, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845904635991, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.517166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:58:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:58:24.636139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:58:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:58:26
Dec  4 05:58:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:58:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:58:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'images', '.rgw.root']
Dec  4 05:58:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:58:27 np0005545273 podman[272023]: 2025-12-04 10:58:27.962957418 +0000 UTC m=+0.064680649 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:58:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:58:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:58:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:58:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:58:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:58:37 np0005545273 podman[272045]: 2025-12-04 10:58:37.977022902 +0000 UTC m=+0.081523483 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  4 05:58:38 np0005545273 podman[272044]: 2025-12-04 10:58:38.026141008 +0000 UTC m=+0.134485324 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  4 05:58:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:54 np0005545273 nova_compute[244644]: 2025-12-04 10:58:54.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:58:54.930 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:58:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:58:54.930 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:58:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:58:54.930 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:58:56 np0005545273 nova_compute[244644]: 2025-12-04 10:58:56.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:56 np0005545273 nova_compute[244644]: 2025-12-04 10:58:56.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:58:56 np0005545273 nova_compute[244644]: 2025-12-04 10:58:56.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:58:56 np0005545273 nova_compute[244644]: 2025-12-04 10:58:56.355 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:58:56 np0005545273 nova_compute[244644]: 2025-12-04 10:58:56.355 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:57 np0005545273 nova_compute[244644]: 2025-12-04 10:58:57.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:57 np0005545273 nova_compute[244644]: 2025-12-04 10:58:57.369 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:58:57 np0005545273 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:58:57 np0005545273 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:58:57 np0005545273 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:58:57 np0005545273 nova_compute[244644]: 2025-12-04 10:58:57.370 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:58:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:58:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511190525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:58:57 np0005545273 nova_compute[244644]: 2025-12-04 10:58:57.896 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.034 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.035 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4961MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.035 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.036 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:58:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:58:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.277 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.278 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.349 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing inventories for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.424 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating ProviderTree inventory for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.425 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Updating inventory in ProviderTree for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  4 05:58:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.444 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing aggregate associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  4 05:58:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:58:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.463 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Refreshing trait associations for resource provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f, traits: COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  4 05:58:58 np0005545273 nova_compute[244644]: 2025-12-04 10:58:58.482 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:58:58 np0005545273 podman[272131]: 2025-12-04 10:58:58.940209884 +0000 UTC m=+0.052007178 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:58:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 05:58:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99470823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.093 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.099 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.115 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.116 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.116 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.117 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.117 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  4 05:58:59 np0005545273 nova_compute[244644]: 2025-12-04 10:58:59.132 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  4 05:58:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:58:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:58:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:59:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:02 np0005545273 nova_compute[244644]: 2025-12-04 10:59:02.134 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:02 np0005545273 nova_compute[244644]: 2025-12-04 10:59:02.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:03 np0005545273 nova_compute[244644]: 2025-12-04 10:59:03.333 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:03 np0005545273 nova_compute[244644]: 2025-12-04 10:59:03.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:03 np0005545273 nova_compute[244644]: 2025-12-04 10:59:03.337 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 05:59:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:04 np0005545273 nova_compute[244644]: 2025-12-04 10:59:04.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:08 np0005545273 podman[272156]: 2025-12-04 10:59:08.94298045 +0000 UTC m=+0.045495078 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 05:59:08 np0005545273 podman[272155]: 2025-12-04 10:59:08.99713329 +0000 UTC m=+0.102116098 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true)
Dec  4 05:59:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:10 np0005545273 nova_compute[244644]: 2025-12-04 10:59:10.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 05:59:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3029874939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 05:59:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 05:59:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3029874939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 05:59:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:13 np0005545273 nova_compute[244644]: 2025-12-04 10:59:13.351 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:13 np0005545273 nova_compute[244644]: 2025-12-04 10:59:13.352 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  4 05:59:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 05:59:17 np0005545273 podman[272343]: 2025-12-04 10:59:17.818625025 +0000 UTC m=+0.049660591 container create ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:59:17 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 05:59:17 np0005545273 systemd[1]: Started libpod-conmon-ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23.scope.
Dec  4 05:59:17 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:59:17 np0005545273 podman[272343]: 2025-12-04 10:59:17.797327071 +0000 UTC m=+0.028362657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:59:17 np0005545273 podman[272343]: 2025-12-04 10:59:17.899010719 +0000 UTC m=+0.130046305 container init ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  4 05:59:17 np0005545273 podman[272343]: 2025-12-04 10:59:17.908750898 +0000 UTC m=+0.139786464 container start ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 05:59:17 np0005545273 podman[272343]: 2025-12-04 10:59:17.913897205 +0000 UTC m=+0.144932801 container attach ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  4 05:59:17 np0005545273 ecstatic_visvesvaraya[272359]: 167 167
Dec  4 05:59:17 np0005545273 systemd[1]: libpod-ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23.scope: Deactivated successfully.
Dec  4 05:59:17 np0005545273 podman[272343]: 2025-12-04 10:59:17.91860537 +0000 UTC m=+0.149640936 container died ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 05:59:17 np0005545273 systemd[1]: var-lib-containers-storage-overlay-feb548d5bd6bcb2672dc4be056990c8401b41b1fab696133427df491cf9f8408-merged.mount: Deactivated successfully.
Dec  4 05:59:17 np0005545273 podman[272343]: 2025-12-04 10:59:17.963313878 +0000 UTC m=+0.194349444 container remove ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:59:17 np0005545273 systemd[1]: libpod-conmon-ed2c3af69f7f20a5dde0af7cc1fd0d5dba8e3cd83de173859cadd0fed0863b23.scope: Deactivated successfully.
Dec  4 05:59:18 np0005545273 podman[272382]: 2025-12-04 10:59:18.126801074 +0000 UTC m=+0.038992529 container create 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:59:18 np0005545273 systemd[1]: Started libpod-conmon-9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123.scope.
Dec  4 05:59:18 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:59:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:18 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:18 np0005545273 podman[272382]: 2025-12-04 10:59:18.205686551 +0000 UTC m=+0.117878026 container init 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:59:18 np0005545273 podman[272382]: 2025-12-04 10:59:18.110843881 +0000 UTC m=+0.023035356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:59:18 np0005545273 podman[272382]: 2025-12-04 10:59:18.21462772 +0000 UTC m=+0.126819175 container start 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:59:18 np0005545273 podman[272382]: 2025-12-04 10:59:18.218387543 +0000 UTC m=+0.130579008 container attach 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:59:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:18 np0005545273 objective_curran[272399]: --> passed data devices: 0 physical, 3 LVM
Dec  4 05:59:18 np0005545273 objective_curran[272399]: --> All data devices are unavailable
Dec  4 05:59:18 np0005545273 systemd[1]: libpod-9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123.scope: Deactivated successfully.
Dec  4 05:59:18 np0005545273 podman[272382]: 2025-12-04 10:59:18.680214195 +0000 UTC m=+0.592405650 container died 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  4 05:59:18 np0005545273 systemd[1]: var-lib-containers-storage-overlay-eda6c3a40df88c8ada2629beb653442d650cc7aaea6a94a94ab8794fc1f5f1f4-merged.mount: Deactivated successfully.
Dec  4 05:59:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:20 np0005545273 podman[272382]: 2025-12-04 10:59:20.343619448 +0000 UTC m=+2.255810913 container remove 9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  4 05:59:20 np0005545273 systemd[1]: libpod-conmon-9c52604297ddc7852b4e0e6e3f43a8302a08bc2a288920e1f0872394cd05d123.scope: Deactivated successfully.
Dec  4 05:59:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:20 np0005545273 podman[272493]: 2025-12-04 10:59:20.815003935 +0000 UTC m=+0.021986221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:59:20 np0005545273 podman[272493]: 2025-12-04 10:59:20.93288052 +0000 UTC m=+0.139862786 container create 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:59:20 np0005545273 systemd[1]: Started libpod-conmon-6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7.scope.
Dec  4 05:59:20 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:59:21 np0005545273 podman[272493]: 2025-12-04 10:59:21.006728363 +0000 UTC m=+0.213710659 container init 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 05:59:21 np0005545273 podman[272493]: 2025-12-04 10:59:21.013022118 +0000 UTC m=+0.220004384 container start 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  4 05:59:21 np0005545273 podman[272493]: 2025-12-04 10:59:21.016611337 +0000 UTC m=+0.223593693 container attach 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:59:21 np0005545273 optimistic_ride[272509]: 167 167
Dec  4 05:59:21 np0005545273 systemd[1]: libpod-6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7.scope: Deactivated successfully.
Dec  4 05:59:21 np0005545273 podman[272493]: 2025-12-04 10:59:21.019650781 +0000 UTC m=+0.226633067 container died 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:59:21 np0005545273 systemd[1]: var-lib-containers-storage-overlay-ea3bfa1e9f8733e48f3acd06c90a2b8245a327f110e12b33e52ecfaad2d70ff3-merged.mount: Deactivated successfully.
Dec  4 05:59:21 np0005545273 podman[272493]: 2025-12-04 10:59:21.058524635 +0000 UTC m=+0.265506901 container remove 6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  4 05:59:21 np0005545273 systemd[1]: libpod-conmon-6537a8c2d03547aa4851c0ea11947d5c3d9d84e8122c7010b120f9fe7ea787a7.scope: Deactivated successfully.
Dec  4 05:59:21 np0005545273 podman[272534]: 2025-12-04 10:59:21.196952815 +0000 UTC m=+0.022537504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:59:21 np0005545273 podman[272534]: 2025-12-04 10:59:21.478402467 +0000 UTC m=+0.303987136 container create 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  4 05:59:21 np0005545273 systemd[1]: Started libpod-conmon-2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b.scope.
Dec  4 05:59:21 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:59:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:21 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:21 np0005545273 podman[272534]: 2025-12-04 10:59:21.706729085 +0000 UTC m=+0.532313784 container init 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 05:59:21 np0005545273 podman[272534]: 2025-12-04 10:59:21.723013595 +0000 UTC m=+0.548598264 container start 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  4 05:59:21 np0005545273 podman[272534]: 2025-12-04 10:59:21.908828499 +0000 UTC m=+0.734413198 container attach 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]: {
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:    "0": [
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:        {
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "devices": [
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "/dev/loop3"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            ],
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_name": "ceph_lv0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_size": "21470642176",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "name": "ceph_lv0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "tags": {
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cluster_name": "ceph",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.crush_device_class": "",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.encrypted": "0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.objectstore": "bluestore",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osd_id": "0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.type": "block",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.vdo": "0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.with_tpm": "0"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            },
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "type": "block",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "vg_name": "ceph_vg0"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:        }
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:    ],
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:    "1": [
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:        {
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "devices": [
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "/dev/loop4"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            ],
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_name": "ceph_lv1",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_size": "21470642176",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "name": "ceph_lv1",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "tags": {
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cluster_name": "ceph",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.crush_device_class": "",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.encrypted": "0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.objectstore": "bluestore",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osd_id": "1",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.type": "block",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.vdo": "0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.with_tpm": "0"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            },
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "type": "block",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "vg_name": "ceph_vg1"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:        }
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:    ],
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:    "2": [
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:        {
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "devices": [
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "/dev/loop5"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            ],
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_name": "ceph_lv2",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_size": "21470642176",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "name": "ceph_lv2",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "tags": {
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cephx_lockbox_secret": "",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.cluster_name": "ceph",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.crush_device_class": "",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.encrypted": "0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.objectstore": "bluestore",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osd_id": "2",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.type": "block",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.vdo": "0",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:                "ceph.with_tpm": "0"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            },
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "type": "block",
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:            "vg_name": "ceph_vg2"
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:        }
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]:    ]
Dec  4 05:59:21 np0005545273 great_dijkstra[272551]: }
Dec  4 05:59:22 np0005545273 systemd[1]: libpod-2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b.scope: Deactivated successfully.
Dec  4 05:59:22 np0005545273 podman[272534]: 2025-12-04 10:59:22.030730073 +0000 UTC m=+0.856314742 container died 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec  4 05:59:22 np0005545273 systemd[1]: var-lib-containers-storage-overlay-b540ffe170e77cb54c9e6b2467daf48cdcbbaa1992521600db07fbce8abe42dc-merged.mount: Deactivated successfully.
Dec  4 05:59:22 np0005545273 podman[272534]: 2025-12-04 10:59:22.078825764 +0000 UTC m=+0.904410433 container remove 2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 05:59:22 np0005545273 systemd[1]: libpod-conmon-2fb554606f8540067d273fa95273927b286865096dc75b2fedc6a3b3737dac3b.scope: Deactivated successfully.
Dec  4 05:59:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:22 np0005545273 podman[272634]: 2025-12-04 10:59:22.550621331 +0000 UTC m=+0.042917524 container create d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 05:59:22 np0005545273 systemd[1]: Started libpod-conmon-d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03.scope.
Dec  4 05:59:22 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:59:22 np0005545273 podman[272634]: 2025-12-04 10:59:22.530889407 +0000 UTC m=+0.023185650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:59:22 np0005545273 podman[272634]: 2025-12-04 10:59:22.624761782 +0000 UTC m=+0.117057975 container init d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  4 05:59:22 np0005545273 podman[272634]: 2025-12-04 10:59:22.633052036 +0000 UTC m=+0.125348229 container start d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  4 05:59:22 np0005545273 podman[272634]: 2025-12-04 10:59:22.637152746 +0000 UTC m=+0.129448959 container attach d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:59:22 np0005545273 musing_grothendieck[272650]: 167 167
Dec  4 05:59:22 np0005545273 systemd[1]: libpod-d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03.scope: Deactivated successfully.
Dec  4 05:59:22 np0005545273 podman[272634]: 2025-12-04 10:59:22.638642693 +0000 UTC m=+0.130938886 container died d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec  4 05:59:22 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0b9bc19fa99b7d819aab9aa86d070e5367747894db9c4cb808dbd38220ef7a75-merged.mount: Deactivated successfully.
Dec  4 05:59:22 np0005545273 podman[272634]: 2025-12-04 10:59:22.677742283 +0000 UTC m=+0.170038476 container remove d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:59:22 np0005545273 systemd[1]: libpod-conmon-d951503602989c33b8019993e1657eb0dee3d4b609e676afb6eb7600f8b4ca03.scope: Deactivated successfully.
Dec  4 05:59:22 np0005545273 podman[272674]: 2025-12-04 10:59:22.859919397 +0000 UTC m=+0.049506286 container create a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:59:22 np0005545273 systemd[1]: Started libpod-conmon-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope.
Dec  4 05:59:22 np0005545273 systemd[1]: Started libcrun container.
Dec  4 05:59:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:22 np0005545273 podman[272674]: 2025-12-04 10:59:22.836995125 +0000 UTC m=+0.026582074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 05:59:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:22 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 05:59:22 np0005545273 podman[272674]: 2025-12-04 10:59:22.943874619 +0000 UTC m=+0.133461528 container init a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 05:59:22 np0005545273 podman[272674]: 2025-12-04 10:59:22.951110197 +0000 UTC m=+0.140697086 container start a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 05:59:22 np0005545273 podman[272674]: 2025-12-04 10:59:22.9544761 +0000 UTC m=+0.144062999 container attach a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 05:59:23 np0005545273 lvm[272768]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 05:59:23 np0005545273 lvm[272769]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 05:59:23 np0005545273 lvm[272768]: VG ceph_vg0 finished
Dec  4 05:59:23 np0005545273 lvm[272769]: VG ceph_vg1 finished
Dec  4 05:59:23 np0005545273 lvm[272771]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 05:59:23 np0005545273 lvm[272771]: VG ceph_vg2 finished
Dec  4 05:59:23 np0005545273 boring_ramanujan[272690]: {}
Dec  4 05:59:23 np0005545273 systemd[1]: libpod-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope: Deactivated successfully.
Dec  4 05:59:23 np0005545273 systemd[1]: libpod-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope: Consumed 1.397s CPU time.
Dec  4 05:59:23 np0005545273 podman[272674]: 2025-12-04 10:59:23.810921614 +0000 UTC m=+1.000508523 container died a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 05:59:23 np0005545273 systemd[1]: var-lib-containers-storage-overlay-77a566a991e7cbe1223fbe6501502e593cf3faa88ff5ff8dbd0050dfdb3ffb77-merged.mount: Deactivated successfully.
Dec  4 05:59:23 np0005545273 podman[272674]: 2025-12-04 10:59:23.860511262 +0000 UTC m=+1.050098151 container remove a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 05:59:23 np0005545273 systemd[1]: libpod-conmon-a1dc361439deaac90fbbfc0102d1bd555b76aa6e40045409233e525ef31e8452.scope: Deactivated successfully.
Dec  4 05:59:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 05:59:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:59:23 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 05:59:23 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.243929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964244283, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 721, "num_deletes": 257, "total_data_size": 914605, "memory_usage": 928232, "flush_reason": "Manual Compaction"}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964253386, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 906639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33405, "largest_seqno": 34125, "table_properties": {"data_size": 902814, "index_size": 1605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8285, "raw_average_key_size": 18, "raw_value_size": 895224, "raw_average_value_size": 2039, "num_data_blocks": 72, "num_entries": 439, "num_filter_entries": 439, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764845905, "oldest_key_time": 1764845905, "file_creation_time": 1764845964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9489 microseconds, and 3980 cpu microseconds.
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.253439) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 906639 bytes OK
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.253471) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.255700) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.255717) EVENT_LOG_v1 {"time_micros": 1764845964255710, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.255739) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 910861, prev total WAL file size 910861, number of live WAL files 2.
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.256396) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(885KB)], [71(8577KB)]
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964256442, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9690386, "oldest_snapshot_seqno": -1}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6247 keys, 9444706 bytes, temperature: kUnknown
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964314462, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9444706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9403589, "index_size": 24367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158761, "raw_average_key_size": 25, "raw_value_size": 9292219, "raw_average_value_size": 1487, "num_data_blocks": 988, "num_entries": 6247, "num_filter_entries": 6247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764843243, "oldest_key_time": 0, "file_creation_time": 1764845964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bea4932-39ce-4c6c-8b9b-253595ae5108", "db_session_id": "Y30CWPND84TKXOFWI6NG", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.314722) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9444706 bytes
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.316300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 162.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(21.1) write-amplify(10.4) OK, records in: 6773, records dropped: 526 output_compression: NoCompression
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.316316) EVENT_LOG_v1 {"time_micros": 1764845964316307, "job": 40, "event": "compaction_finished", "compaction_time_micros": 58110, "compaction_time_cpu_micros": 25473, "output_level": 6, "num_output_files": 1, "total_output_size": 9444706, "num_input_records": 6773, "num_output_records": 6247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964316553, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764845964318037, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.256290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: rocksdb: (Original Log Time 2025/12/04-10:59:24.318073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  4 05:59:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:59:24 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 05:59:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_10:59:26
Dec  4 05:59:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 05:59:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 05:59:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'images']
Dec  4 05:59:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:59:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:59:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:59:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:59:29 np0005545273 podman[272812]: 2025-12-04 10:59:29.970992904 +0000 UTC m=+0.077286239 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  4 05:59:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 05:59:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 05:59:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:39 np0005545273 podman[272834]: 2025-12-04 10:59:39.974539818 +0000 UTC m=+0.079859732 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec  4 05:59:39 np0005545273 podman[272835]: 2025-12-04 10:59:39.972974169 +0000 UTC m=+0.075663189 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  4 05:59:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:54 np0005545273 nova_compute[244644]: 2025-12-04 10:59:54.444 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:59:54.931 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:59:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:59:54.931 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:59:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 10:59:54.932 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:59:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:59:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:59:58 np0005545273 nova_compute[244644]: 2025-12-04 10:59:58.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:58 np0005545273 nova_compute[244644]: 2025-12-04 10:59:58.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 05:59:58 np0005545273 nova_compute[244644]: 2025-12-04 10:59:58.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 05:59:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:59:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 05:59:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 05:59:58 np0005545273 nova_compute[244644]: 2025-12-04 10:59:58.637 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 05:59:58 np0005545273 nova_compute[244644]: 2025-12-04 10:59:58.637 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 05:59:59 np0005545273 nova_compute[244644]: 2025-12-04 10:59:59.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 05:59:59 np0005545273 nova_compute[244644]: 2025-12-04 10:59:59.544 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 05:59:59 np0005545273 nova_compute[244644]: 2025-12-04 10:59:59.545 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 05:59:59 np0005545273 nova_compute[244644]: 2025-12-04 10:59:59.545 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 05:59:59 np0005545273 nova_compute[244644]: 2025-12-04 10:59:59.545 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 05:59:59 np0005545273 nova_compute[244644]: 2025-12-04 10:59:59.546 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 05:59:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 05:59:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:00:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 06:00:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844193991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.158 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.342 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.344 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4928MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.344 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.345 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 06:00:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.896 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.896 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 06:00:00 np0005545273 nova_compute[244644]: 2025-12-04 11:00:00.921 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 06:00:00 np0005545273 podman[272902]: 2025-12-04 11:00:00.956178975 +0000 UTC m=+0.065537811 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 06:00:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 06:00:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/281020915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 06:00:01 np0005545273 nova_compute[244644]: 2025-12-04 11:00:01.517 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 06:00:01 np0005545273 nova_compute[244644]: 2025-12-04 11:00:01.523 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 06:00:01 np0005545273 nova_compute[244644]: 2025-12-04 11:00:01.645 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 06:00:01 np0005545273 nova_compute[244644]: 2025-12-04 11:00:01.647 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 06:00:01 np0005545273 nova_compute[244644]: 2025-12-04 11:00:01.647 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 06:00:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:03 np0005545273 nova_compute[244644]: 2025-12-04 11:00:03.642 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:03 np0005545273 nova_compute[244644]: 2025-12-04 11:00:03.678 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:04 np0005545273 nova_compute[244644]: 2025-12-04 11:00:04.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:04 np0005545273 nova_compute[244644]: 2025-12-04 11:00:04.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:04 np0005545273 nova_compute[244644]: 2025-12-04 11:00:04.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:05 np0005545273 nova_compute[244644]: 2025-12-04 11:00:05.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:05 np0005545273 nova_compute[244644]: 2025-12-04 11:00:05.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 06:00:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:10 np0005545273 podman[272946]: 2025-12-04 11:00:10.941308167 +0000 UTC m=+0.047851635 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  4 06:00:10 np0005545273 podman[272945]: 2025-12-04 11:00:10.97231109 +0000 UTC m=+0.080406406 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  4 06:00:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 06:00:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592014196' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 06:00:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 06:00:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592014196' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 06:00:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:24 np0005545273 podman[273079]: 2025-12-04 11:00:24.580580666 +0000 UTC m=+0.067722885 container exec 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  4 06:00:24 np0005545273 podman[273079]: 2025-12-04 11:00:24.704445577 +0000 UTC m=+0.191587786 container exec_died 5c64ed29fbafc21d1bc456fa5f9e4b7b43fdd397719a96ad439fe78863243e88 (image=quay.io/ceph/ceph:v20, name=ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mon-compute-0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  4 06:00:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 06:00:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:25 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 06:00:25 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:25 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 06:00:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:26 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 06:00:26 np0005545273 podman[273406]: 2025-12-04 11:00:26.557188471 +0000 UTC m=+0.039748328 container create 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 06:00:26 np0005545273 systemd[1]: Started libpod-conmon-00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5.scope.
Dec  4 06:00:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:00:26 np0005545273 podman[273406]: 2025-12-04 11:00:26.540092191 +0000 UTC m=+0.022652068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:00:26 np0005545273 podman[273406]: 2025-12-04 11:00:26.652763738 +0000 UTC m=+0.135323615 container init 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  4 06:00:26 np0005545273 podman[273406]: 2025-12-04 11:00:26.661667937 +0000 UTC m=+0.144227794 container start 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  4 06:00:26 np0005545273 podman[273406]: 2025-12-04 11:00:26.664873675 +0000 UTC m=+0.147433532 container attach 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 06:00:26 np0005545273 hungry_bouman[273423]: 167 167
Dec  4 06:00:26 np0005545273 systemd[1]: libpod-00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5.scope: Deactivated successfully.
Dec  4 06:00:26 np0005545273 podman[273406]: 2025-12-04 11:00:26.669305785 +0000 UTC m=+0.151865692 container died 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 06:00:26 np0005545273 systemd[1]: var-lib-containers-storage-overlay-5d1f60c359dbc4940c5c3c7f8c2b5372049ee72f657ced325ddb3b867f8494ce-merged.mount: Deactivated successfully.
Dec  4 06:00:26 np0005545273 podman[273406]: 2025-12-04 11:00:26.713353536 +0000 UTC m=+0.195913393 container remove 00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  4 06:00:26 np0005545273 systemd[1]: libpod-conmon-00bf8313b8db2faf808ab65798af01b612efd7f73dfc00abf953769f8f3c49b5.scope: Deactivated successfully.
Dec  4 06:00:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_11:00:26
Dec  4 06:00:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 06:00:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 06:00:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr', 'volumes', '.rgw.root', 'default.rgw.log']
Dec  4 06:00:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 06:00:26 np0005545273 podman[273447]: 2025-12-04 11:00:26.885469093 +0000 UTC m=+0.052464499 container create 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 06:00:26 np0005545273 systemd[1]: Started libpod-conmon-8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414.scope.
Dec  4 06:00:26 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:00:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:26 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:26 np0005545273 podman[273447]: 2025-12-04 11:00:26.857923297 +0000 UTC m=+0.024918793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:00:26 np0005545273 podman[273447]: 2025-12-04 11:00:26.957251166 +0000 UTC m=+0.124246592 container init 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  4 06:00:26 np0005545273 podman[273447]: 2025-12-04 11:00:26.962777262 +0000 UTC m=+0.129772668 container start 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  4 06:00:26 np0005545273 podman[273447]: 2025-12-04 11:00:26.966718299 +0000 UTC m=+0.133713725 container attach 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 06:00:27 np0005545273 cranky_mendel[273463]: --> passed data devices: 0 physical, 3 LVM
Dec  4 06:00:27 np0005545273 cranky_mendel[273463]: --> All data devices are unavailable
Dec  4 06:00:27 np0005545273 systemd[1]: libpod-8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414.scope: Deactivated successfully.
Dec  4 06:00:27 np0005545273 podman[273447]: 2025-12-04 11:00:27.429581777 +0000 UTC m=+0.596577183 container died 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  4 06:00:27 np0005545273 systemd[1]: var-lib-containers-storage-overlay-62484d0afe0981f228268fdac390d669bf04d83ce0b20f6c865407a617e897f6-merged.mount: Deactivated successfully.
Dec  4 06:00:27 np0005545273 podman[273447]: 2025-12-04 11:00:27.474853589 +0000 UTC m=+0.641848995 container remove 8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 06:00:27 np0005545273 systemd[1]: libpod-conmon-8d896cd7d4a7756a92f91bf585f112bc6c259483eaa2f8def9d2a5e10f93d414.scope: Deactivated successfully.
Dec  4 06:00:27 np0005545273 podman[273557]: 2025-12-04 11:00:27.976230602 +0000 UTC m=+0.039691555 container create 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 06:00:28 np0005545273 systemd[1]: Started libpod-conmon-4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d.scope.
Dec  4 06:00:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:00:28 np0005545273 podman[273557]: 2025-12-04 11:00:27.958056586 +0000 UTC m=+0.021517569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:00:28 np0005545273 podman[273557]: 2025-12-04 11:00:28.064216363 +0000 UTC m=+0.127677326 container init 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 06:00:28 np0005545273 podman[273557]: 2025-12-04 11:00:28.073521592 +0000 UTC m=+0.136982545 container start 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 06:00:28 np0005545273 podman[273557]: 2025-12-04 11:00:28.077238723 +0000 UTC m=+0.140699676 container attach 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 06:00:28 np0005545273 nostalgic_lumiere[273574]: 167 167
Dec  4 06:00:28 np0005545273 systemd[1]: libpod-4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d.scope: Deactivated successfully.
Dec  4 06:00:28 np0005545273 podman[273557]: 2025-12-04 11:00:28.082069452 +0000 UTC m=+0.145530425 container died 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:00:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-2198143474a0656d5632e801956129500f0c8e9460b95bbc617575067e47dc43-merged.mount: Deactivated successfully.
Dec  4 06:00:28 np0005545273 podman[273557]: 2025-12-04 11:00:28.125586881 +0000 UTC m=+0.189047834 container remove 4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  4 06:00:28 np0005545273 systemd[1]: libpod-conmon-4ca27acd15ccbcd74e01c61f6756e3a1515237c81152c83098e57aaa62e5006d.scope: Deactivated successfully.
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 06:00:28 np0005545273 podman[273595]: 2025-12-04 11:00:28.296794235 +0000 UTC m=+0.045850597 container create 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 06:00:28 np0005545273 systemd[1]: Started libpod-conmon-9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b.scope.
Dec  4 06:00:28 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:00:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:28 np0005545273 podman[273595]: 2025-12-04 11:00:28.27703705 +0000 UTC m=+0.026093432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:00:28 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:28 np0005545273 podman[273595]: 2025-12-04 11:00:28.383883434 +0000 UTC m=+0.132939816 container init 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  4 06:00:28 np0005545273 podman[273595]: 2025-12-04 11:00:28.431925484 +0000 UTC m=+0.180981846 container start 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 06:00:28 np0005545273 podman[273595]: 2025-12-04 11:00:28.436056135 +0000 UTC m=+0.185112517 container attach 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:00:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:28 np0005545273 brave_edison[273611]: {
Dec  4 06:00:28 np0005545273 brave_edison[273611]:    "0": [
Dec  4 06:00:28 np0005545273 brave_edison[273611]:        {
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "devices": [
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "/dev/loop3"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            ],
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_name": "ceph_lv0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_size": "21470642176",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "name": "ceph_lv0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "tags": {
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cephx_lockbox_secret": "",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cluster_name": "ceph",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.crush_device_class": "",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.encrypted": "0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.objectstore": "bluestore",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osd_id": "0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.type": "block",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.vdo": "0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.with_tpm": "0"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            },
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "type": "block",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "vg_name": "ceph_vg0"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:        }
Dec  4 06:00:28 np0005545273 brave_edison[273611]:    ],
Dec  4 06:00:28 np0005545273 brave_edison[273611]:    "1": [
Dec  4 06:00:28 np0005545273 brave_edison[273611]:        {
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "devices": [
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "/dev/loop4"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            ],
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_name": "ceph_lv1",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_size": "21470642176",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "name": "ceph_lv1",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "tags": {
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cephx_lockbox_secret": "",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cluster_name": "ceph",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.crush_device_class": "",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.encrypted": "0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.objectstore": "bluestore",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osd_id": "1",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.type": "block",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.vdo": "0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.with_tpm": "0"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            },
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "type": "block",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "vg_name": "ceph_vg1"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:        }
Dec  4 06:00:28 np0005545273 brave_edison[273611]:    ],
Dec  4 06:00:28 np0005545273 brave_edison[273611]:    "2": [
Dec  4 06:00:28 np0005545273 brave_edison[273611]:        {
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "devices": [
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "/dev/loop5"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            ],
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_name": "ceph_lv2",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_size": "21470642176",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "name": "ceph_lv2",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "tags": {
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cephx_lockbox_secret": "",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.cluster_name": "ceph",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.crush_device_class": "",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.encrypted": "0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.objectstore": "bluestore",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osd_id": "2",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.type": "block",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.vdo": "0",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:                "ceph.with_tpm": "0"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            },
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "type": "block",
Dec  4 06:00:28 np0005545273 brave_edison[273611]:            "vg_name": "ceph_vg2"
Dec  4 06:00:28 np0005545273 brave_edison[273611]:        }
Dec  4 06:00:28 np0005545273 brave_edison[273611]:    ]
Dec  4 06:00:28 np0005545273 brave_edison[273611]: }
Dec  4 06:00:28 np0005545273 systemd[1]: libpod-9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b.scope: Deactivated successfully.
Dec  4 06:00:28 np0005545273 podman[273595]: 2025-12-04 11:00:28.728928299 +0000 UTC m=+0.477984681 container died 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 06:00:28 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0cf09b1a069d1bf72e3071bf96a86112aca9b9138a06d5ef760a7835982b5876-merged.mount: Deactivated successfully.
Dec  4 06:00:28 np0005545273 podman[273595]: 2025-12-04 11:00:28.938623918 +0000 UTC m=+0.687680280 container remove 9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 06:00:28 np0005545273 systemd[1]: libpod-conmon-9e46e7cba9e92a0b923d7c1bd3a5d1ec92194c9f1cdb414fbfce6a1c8997364b.scope: Deactivated successfully.
Dec  4 06:00:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:29 np0005545273 podman[273694]: 2025-12-04 11:00:29.442634017 +0000 UTC m=+0.067620632 container create d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 06:00:29 np0005545273 systemd[1]: Started libpod-conmon-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope.
Dec  4 06:00:29 np0005545273 podman[273694]: 2025-12-04 11:00:29.404003508 +0000 UTC m=+0.028990163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:00:29 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:00:29 np0005545273 podman[273694]: 2025-12-04 11:00:29.521447613 +0000 UTC m=+0.146434248 container init d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  4 06:00:29 np0005545273 podman[273694]: 2025-12-04 11:00:29.553662504 +0000 UTC m=+0.178649109 container start d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  4 06:00:29 np0005545273 podman[273694]: 2025-12-04 11:00:29.558120513 +0000 UTC m=+0.183107148 container attach d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 06:00:29 np0005545273 peaceful_chandrasekhar[273710]: 167 167
Dec  4 06:00:29 np0005545273 systemd[1]: libpod-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope: Deactivated successfully.
Dec  4 06:00:29 np0005545273 conmon[273710]: conmon d942c2aa4cc58121f61b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope/container/memory.events
Dec  4 06:00:29 np0005545273 podman[273694]: 2025-12-04 11:00:29.561687141 +0000 UTC m=+0.186673756 container died d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  4 06:00:29 np0005545273 systemd[1]: var-lib-containers-storage-overlay-f4d2bf931d1ac1c29dad3a1cf907223e1bbff8c76ddf349ef7bcdfe77d3ea2b7-merged.mount: Deactivated successfully.
Dec  4 06:00:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:00:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:00:29 np0005545273 podman[273694]: 2025-12-04 11:00:29.868986648 +0000 UTC m=+0.493973263 container remove d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  4 06:00:29 np0005545273 systemd[1]: libpod-conmon-d942c2aa4cc58121f61b66cfb70eaf09091b529a38ed27a668ddfe098ccd351b.scope: Deactivated successfully.
Dec  4 06:00:30 np0005545273 podman[273733]: 2025-12-04 11:00:30.027538202 +0000 UTC m=+0.031747341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:00:30 np0005545273 podman[273733]: 2025-12-04 11:00:30.333020304 +0000 UTC m=+0.337229453 container create 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec  4 06:00:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:30 np0005545273 systemd[1]: Started libpod-conmon-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope.
Dec  4 06:00:30 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:00:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:30 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 06:00:30 np0005545273 podman[273733]: 2025-12-04 11:00:30.754530537 +0000 UTC m=+0.758739666 container init 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  4 06:00:30 np0005545273 podman[273733]: 2025-12-04 11:00:30.769615627 +0000 UTC m=+0.773824736 container start 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec  4 06:00:30 np0005545273 podman[273733]: 2025-12-04 11:00:30.946921222 +0000 UTC m=+0.951130351 container attach 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 06:00:31 np0005545273 lvm[273832]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 06:00:31 np0005545273 lvm[273842]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 06:00:31 np0005545273 lvm[273832]: VG ceph_vg0 finished
Dec  4 06:00:31 np0005545273 lvm[273842]: VG ceph_vg1 finished
Dec  4 06:00:31 np0005545273 lvm[273841]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 06:00:31 np0005545273 lvm[273841]: VG ceph_vg2 finished
Dec  4 06:00:31 np0005545273 podman[273824]: 2025-12-04 11:00:31.571466391 +0000 UTC m=+0.100337106 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 06:00:31 np0005545273 bold_noether[273749]: {}
Dec  4 06:00:31 np0005545273 systemd[1]: libpod-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope: Deactivated successfully.
Dec  4 06:00:31 np0005545273 systemd[1]: libpod-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope: Consumed 1.421s CPU time.
Dec  4 06:00:31 np0005545273 podman[273733]: 2025-12-04 11:00:31.645875068 +0000 UTC m=+1.650084187 container died 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 06:00:31 np0005545273 systemd[1]: var-lib-containers-storage-overlay-e9a896c512ee234fbd818a91b4dd01f9d1706369d2df74bb8536331ee34f8228-merged.mount: Deactivated successfully.
Dec  4 06:00:31 np0005545273 podman[273733]: 2025-12-04 11:00:31.700468499 +0000 UTC m=+1.704677608 container remove 7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_noether, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec  4 06:00:31 np0005545273 systemd[1]: libpod-conmon-7ddd0c15812ec8f69b7e0415e7b8c352ea160e867ccdc1066e665eefa05e7481.scope: Deactivated successfully.
Dec  4 06:00:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 06:00:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:31 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 06:00:31 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:32 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:00:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:00:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 06:00:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:41 np0005545273 podman[273890]: 2025-12-04 11:00:41.967921037 +0000 UTC m=+0.068614446 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 06:00:42 np0005545273 podman[273889]: 2025-12-04 11:00:42.000167239 +0000 UTC m=+0.101180186 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 06:00:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 11:00:54.932 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 06:00:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 11:00:54.932 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 06:00:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 11:00:54.933 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 06:00:55 np0005545273 nova_compute[244644]: 2025-12-04 11:00:55.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:00:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:00:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:00:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:00:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:00:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:00:59 np0005545273 nova_compute[244644]: 2025-12-04 11:00:59.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:00:59 np0005545273 nova_compute[244644]: 2025-12-04 11:00:59.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 06:00:59 np0005545273 nova_compute[244644]: 2025-12-04 11:00:59.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 06:00:59 np0005545273 nova_compute[244644]: 2025-12-04 11:00:59.363 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 06:00:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:00:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:01:00 np0005545273 nova_compute[244644]: 2025-12-04 11:01:00.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:00 np0005545273 nova_compute[244644]: 2025-12-04 11:01:00.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:00 np0005545273 nova_compute[244644]: 2025-12-04 11:01:00.500 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 06:01:00 np0005545273 nova_compute[244644]: 2025-12-04 11:01:00.501 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 06:01:00 np0005545273 nova_compute[244644]: 2025-12-04 11:01:00.502 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 06:01:00 np0005545273 nova_compute[244644]: 2025-12-04 11:01:00.502 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 06:01:00 np0005545273 nova_compute[244644]: 2025-12-04 11:01:00.502 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 06:01:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 06:01:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856557719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.078 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.246 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.248 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.249 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.249 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.940 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.940 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 06:01:01 np0005545273 nova_compute[244644]: 2025-12-04 11:01:01.958 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 06:01:01 np0005545273 podman[273953]: 2025-12-04 11:01:01.99954063 +0000 UTC m=+0.100872248 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  4 06:01:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 06:01:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862235050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 06:01:02 np0005545273 nova_compute[244644]: 2025-12-04 11:01:02.537 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 06:01:02 np0005545273 nova_compute[244644]: 2025-12-04 11:01:02.543 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 06:01:02 np0005545273 nova_compute[244644]: 2025-12-04 11:01:02.579 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 06:01:02 np0005545273 nova_compute[244644]: 2025-12-04 11:01:02.581 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 06:01:02 np0005545273 nova_compute[244644]: 2025-12-04 11:01:02.582 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 06:01:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:06 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:06 np0005545273 nova_compute[244644]: 2025-12-04 11:01:06.578 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:06 np0005545273 nova_compute[244644]: 2025-12-04 11:01:06.578 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:06 np0005545273 nova_compute[244644]: 2025-12-04 11:01:06.578 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:06 np0005545273 nova_compute[244644]: 2025-12-04 11:01:06.579 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:07 np0005545273 nova_compute[244644]: 2025-12-04 11:01:07.337 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:07 np0005545273 nova_compute[244644]: 2025-12-04 11:01:07.338 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  4 06:01:08 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:09 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:10 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  4 06:01:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002866429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  4 06:01:11 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  4 06:01:11 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002866429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  4 06:01:12 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:12 np0005545273 podman[274009]: 2025-12-04 11:01:12.94507568 +0000 UTC m=+0.050620354 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  4 06:01:12 np0005545273 podman[274008]: 2025-12-04 11:01:12.97440608 +0000 UTC m=+0.084025754 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  4 06:01:14 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:14 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:16 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:18 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:19 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:20 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:22 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:24 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:24 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:26 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Optimize plan auto_2025-12-04_11:01:26
Dec  4 06:01:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  4 06:01:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] do_upmap
Dec  4 06:01:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] pools ['vms', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec  4 06:01:26 np0005545273 ceph-mgr[75651]: [balancer INFO root] prepared 0/10 upmap changes
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:01:28 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:29 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:01:29 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:01:30 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 06:01:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 06:01:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  4 06:01:32 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 06:01:32 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  4 06:01:32 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:32 np0005545273 podman[274138]: 2025-12-04 11:01:32.948890701 +0000 UTC m=+0.056756474 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 06:01:33 np0005545273 podman[274218]: 2025-12-04 11:01:33.552367842 +0000 UTC m=+0.042379121 container create 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:01:33 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  4 06:01:33 np0005545273 systemd[1]: Started libpod-conmon-2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7.scope.
Dec  4 06:01:33 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:01:33 np0005545273 podman[274218]: 2025-12-04 11:01:33.534433703 +0000 UTC m=+0.024445002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:01:33 np0005545273 podman[274218]: 2025-12-04 11:01:33.640279342 +0000 UTC m=+0.130290621 container init 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:33 np0005545273 podman[274218]: 2025-12-04 11:01:33.647216222 +0000 UTC m=+0.137227491 container start 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 06:01:33 np0005545273 podman[274218]: 2025-12-04 11:01:33.652136234 +0000 UTC m=+0.142147533 container attach 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 06:01:33 np0005545273 epic_benz[274234]: 167 167
Dec  4 06:01:33 np0005545273 systemd[1]: libpod-2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7.scope: Deactivated successfully.
Dec  4 06:01:33 np0005545273 podman[274218]: 2025-12-04 11:01:33.65364087 +0000 UTC m=+0.143652159 container died 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  4 06:01:33 np0005545273 systemd[1]: var-lib-containers-storage-overlay-bca934bbc7694457922d374442947a315e481a35511f6fd2deb186a91006af53-merged.mount: Deactivated successfully.
Dec  4 06:01:33 np0005545273 podman[274218]: 2025-12-04 11:01:33.696628615 +0000 UTC m=+0.186639874 container remove 2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  4 06:01:33 np0005545273 systemd[1]: libpod-conmon-2365dd77d27f53a16abe75f5e68a10c5e55f58b695aec2c998a3f9d8c24959e7.scope: Deactivated successfully.
Dec  4 06:01:33 np0005545273 podman[274258]: 2025-12-04 11:01:33.835848045 +0000 UTC m=+0.022479543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:01:34 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:34 np0005545273 podman[274258]: 2025-12-04 11:01:34.393818829 +0000 UTC m=+0.580450297 container create 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec  4 06:01:34 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:34 np0005545273 systemd[1]: Started libpod-conmon-9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807.scope.
Dec  4 06:01:34 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:01:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:34 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:34 np0005545273 podman[274258]: 2025-12-04 11:01:34.834807039 +0000 UTC m=+1.021438537 container init 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:34 np0005545273 podman[274258]: 2025-12-04 11:01:34.841496893 +0000 UTC m=+1.028128371 container start 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  4 06:01:34 np0005545273 podman[274258]: 2025-12-04 11:01:34.850666019 +0000 UTC m=+1.037297497 container attach 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 06:01:35 np0005545273 sleepy_clarke[274275]: --> passed data devices: 0 physical, 3 LVM
Dec  4 06:01:35 np0005545273 sleepy_clarke[274275]: --> All data devices are unavailable
Dec  4 06:01:35 np0005545273 systemd[1]: libpod-9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807.scope: Deactivated successfully.
Dec  4 06:01:35 np0005545273 podman[274258]: 2025-12-04 11:01:35.33928386 +0000 UTC m=+1.525915338 container died 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  4 06:01:35 np0005545273 systemd[1]: var-lib-containers-storage-overlay-59c53a024315f6affa21cd5990d7a8fb5431a44e83f094dda48602e8881bce32-merged.mount: Deactivated successfully.
Dec  4 06:01:35 np0005545273 podman[274258]: 2025-12-04 11:01:35.75464607 +0000 UTC m=+1.941277548 container remove 9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  4 06:01:35 np0005545273 systemd[1]: libpod-conmon-9d48c70bcffd1c35db284a39beb082e280d15eebaeaaedb46fb9f74e0605b807.scope: Deactivated successfully.
Dec  4 06:01:36 np0005545273 podman[274368]: 2025-12-04 11:01:36.167540411 +0000 UTC m=+0.023045457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:01:36 np0005545273 podman[274368]: 2025-12-04 11:01:36.505241505 +0000 UTC m=+0.360746541 container create 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:36 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:36 np0005545273 systemd[1]: Started libpod-conmon-168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb.scope.
Dec  4 06:01:36 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:01:37 np0005545273 podman[274368]: 2025-12-04 11:01:37.174610864 +0000 UTC m=+1.030115910 container init 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 06:01:37 np0005545273 podman[274368]: 2025-12-04 11:01:37.181336679 +0000 UTC m=+1.036841695 container start 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 06:01:37 np0005545273 vibrant_feistel[274384]: 167 167
Dec  4 06:01:37 np0005545273 podman[274368]: 2025-12-04 11:01:37.185706796 +0000 UTC m=+1.041211812 container attach 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:37 np0005545273 systemd[1]: libpod-168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb.scope: Deactivated successfully.
Dec  4 06:01:37 np0005545273 podman[274368]: 2025-12-04 11:01:37.186389694 +0000 UTC m=+1.041894710 container died 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 06:01:37 np0005545273 systemd[1]: var-lib-containers-storage-overlay-3b2e2461f6c03cd9a8d4856abaaa972b5cac8c4e40ce387a31c5db22d9e625bd-merged.mount: Deactivated successfully.
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] _maybe_adjust
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660929475746917 of space, bias 1.0, pg target 0.19982788427240752 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0006150863533444786 of space, bias 4.0, pg target 0.7381036240133744 quantized to 16 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  4 06:01:37 np0005545273 ceph-mgr[75651]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  4 06:01:38 np0005545273 podman[274368]: 2025-12-04 11:01:38.296814145 +0000 UTC m=+2.152319151 container remove 168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  4 06:01:38 np0005545273 systemd[1]: libpod-conmon-168945af4e589d3747d0a9639f3160f85341c8d8620936129a177dfb7b1cfedb.scope: Deactivated successfully.
Dec  4 06:01:38 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:38 np0005545273 podman[274407]: 2025-12-04 11:01:38.555739764 +0000 UTC m=+0.105310297 container create 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:38 np0005545273 podman[274407]: 2025-12-04 11:01:38.483130991 +0000 UTC m=+0.032701554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:01:38 np0005545273 systemd[1]: Started libpod-conmon-69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab.scope.
Dec  4 06:01:38 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:01:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:38 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:39 np0005545273 podman[274407]: 2025-12-04 11:01:39.096590818 +0000 UTC m=+0.646161371 container init 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 06:01:39 np0005545273 podman[274407]: 2025-12-04 11:01:39.104911802 +0000 UTC m=+0.654482335 container start 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  4 06:01:39 np0005545273 podman[274407]: 2025-12-04 11:01:39.11134356 +0000 UTC m=+0.660914123 container attach 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  4 06:01:39 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:39 np0005545273 awesome_napier[274424]: {
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:    "0": [
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:        {
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "devices": [
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "/dev/loop3"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            ],
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_name": "ceph_lv0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_size": "21470642176",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d6d34217-6607-43be-80be-ae04b730142c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "name": "ceph_lv0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "tags": {
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.block_uuid": "7jzBP6-jMM2-6XKe-unuI-rzR1-AtjO-2s4jEW",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cephx_lockbox_secret": "",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cluster_name": "ceph",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.crush_device_class": "",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.encrypted": "0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.objectstore": "bluestore",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osd_fsid": "d6d34217-6607-43be-80be-ae04b730142c",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osd_id": "0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.type": "block",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.vdo": "0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.with_tpm": "0"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            },
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "type": "block",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "vg_name": "ceph_vg0"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:        }
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:    ],
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:    "1": [
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:        {
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "devices": [
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "/dev/loop4"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            ],
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_name": "ceph_lv1",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_size": "21470642176",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=8cc1daa3-82be-4bdc-8e62-fc5001daf8bb,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "name": "ceph_lv1",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "tags": {
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.block_uuid": "8syywM-8khC-bDAi-rzPJ-6hh0-btzX-ICSyg5",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cephx_lockbox_secret": "",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cluster_name": "ceph",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.crush_device_class": "",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.encrypted": "0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.objectstore": "bluestore",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osd_fsid": "8cc1daa3-82be-4bdc-8e62-fc5001daf8bb",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osd_id": "1",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.type": "block",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.vdo": "0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.with_tpm": "0"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            },
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "type": "block",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "vg_name": "ceph_vg1"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:        }
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:    ],
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:    "2": [
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:        {
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "devices": [
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "/dev/loop5"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            ],
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_name": "ceph_lv2",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_size": "21470642176",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=2ee6d319-dca2-4c06-9365-2240b94f11cb,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "lv_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "name": "ceph_lv2",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "tags": {
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.block_uuid": "1oxdH8-UbBC-4eTY-vNZU-ZPmN-0Efi-oCHXhk",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cephx_lockbox_secret": "",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cluster_fsid": "f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.cluster_name": "ceph",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.crush_device_class": "",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.encrypted": "0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.objectstore": "bluestore",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osd_fsid": "2ee6d319-dca2-4c06-9365-2240b94f11cb",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osd_id": "2",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.type": "block",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.vdo": "0",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:                "ceph.with_tpm": "0"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            },
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "type": "block",
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:            "vg_name": "ceph_vg2"
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:        }
Dec  4 06:01:39 np0005545273 awesome_napier[274424]:    ]
Dec  4 06:01:39 np0005545273 awesome_napier[274424]: }
Dec  4 06:01:39 np0005545273 systemd[1]: libpod-69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab.scope: Deactivated successfully.
Dec  4 06:01:39 np0005545273 podman[274407]: 2025-12-04 11:01:39.450145121 +0000 UTC m=+0.999715654 container died 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  4 06:01:39 np0005545273 systemd[1]: var-lib-containers-storage-overlay-0b11911306110896dc38f08e2ebd7c2ccf19b5839ee82b229536fdb3b9038a80-merged.mount: Deactivated successfully.
Dec  4 06:01:39 np0005545273 podman[274407]: 2025-12-04 11:01:39.791224008 +0000 UTC m=+1.340794541 container remove 69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_napier, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  4 06:01:39 np0005545273 systemd[1]: libpod-conmon-69466c25c0194b755d1b17355eecb795ebdad6ec8ada059c7644895ed31530ab.scope: Deactivated successfully.
Dec  4 06:01:40 np0005545273 podman[274508]: 2025-12-04 11:01:40.257802617 +0000 UTC m=+0.036412045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:01:40 np0005545273 podman[274508]: 2025-12-04 11:01:40.429416602 +0000 UTC m=+0.208026000 container create 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 06:01:40 np0005545273 systemd[1]: Started libpod-conmon-3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380.scope.
Dec  4 06:01:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:01:40 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:40 np0005545273 podman[274508]: 2025-12-04 11:01:40.619515461 +0000 UTC m=+0.398124879 container init 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:40 np0005545273 podman[274508]: 2025-12-04 11:01:40.627055945 +0000 UTC m=+0.405665343 container start 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  4 06:01:40 np0005545273 podman[274508]: 2025-12-04 11:01:40.631068784 +0000 UTC m=+0.409678182 container attach 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:40 np0005545273 dreamy_thompson[274524]: 167 167
Dec  4 06:01:40 np0005545273 systemd[1]: libpod-3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380.scope: Deactivated successfully.
Dec  4 06:01:40 np0005545273 podman[274508]: 2025-12-04 11:01:40.633892853 +0000 UTC m=+0.412502251 container died 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  4 06:01:40 np0005545273 systemd[1]: var-lib-containers-storage-overlay-64f4f232a10c8a75b4adae45ba541a76ca20fd23c002ea3cee1806cd9709c88a-merged.mount: Deactivated successfully.
Dec  4 06:01:40 np0005545273 podman[274508]: 2025-12-04 11:01:40.674259195 +0000 UTC m=+0.452868593 container remove 3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  4 06:01:40 np0005545273 systemd[1]: libpod-conmon-3a4fea3ab54f39e0934e86bf754c40830cf2c03b135e19ed9a07d58b19945380.scope: Deactivated successfully.
Dec  4 06:01:40 np0005545273 podman[274548]: 2025-12-04 11:01:40.855449604 +0000 UTC m=+0.049531367 container create 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:40 np0005545273 systemd[1]: Started libpod-conmon-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope.
Dec  4 06:01:40 np0005545273 podman[274548]: 2025-12-04 11:01:40.837157555 +0000 UTC m=+0.031239338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  4 06:01:40 np0005545273 systemd[1]: Started libcrun container.
Dec  4 06:01:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:40 np0005545273 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  4 06:01:40 np0005545273 podman[274548]: 2025-12-04 11:01:40.962396222 +0000 UTC m=+0.156478005 container init 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  4 06:01:40 np0005545273 podman[274548]: 2025-12-04 11:01:40.969022354 +0000 UTC m=+0.163104117 container start 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  4 06:01:40 np0005545273 podman[274548]: 2025-12-04 11:01:40.973585957 +0000 UTC m=+0.167667760 container attach 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 06:01:41 np0005545273 lvm[274644]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 06:01:41 np0005545273 lvm[274643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 06:01:41 np0005545273 lvm[274644]: VG ceph_vg1 finished
Dec  4 06:01:41 np0005545273 lvm[274643]: VG ceph_vg0 finished
Dec  4 06:01:41 np0005545273 lvm[274646]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 06:01:41 np0005545273 lvm[274646]: VG ceph_vg2 finished
Dec  4 06:01:41 np0005545273 stupefied_hellman[274565]: {}
Dec  4 06:01:41 np0005545273 systemd[1]: libpod-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope: Deactivated successfully.
Dec  4 06:01:41 np0005545273 podman[274548]: 2025-12-04 11:01:41.903186647 +0000 UTC m=+1.097268430 container died 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  4 06:01:41 np0005545273 systemd[1]: libpod-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope: Consumed 1.569s CPU time.
Dec  4 06:01:41 np0005545273 systemd[1]: var-lib-containers-storage-overlay-cce8da47a33ea228497e4f8e8c52399b7664ca8b73795d35b43f1d2b6b54dffc-merged.mount: Deactivated successfully.
Dec  4 06:01:41 np0005545273 podman[274548]: 2025-12-04 11:01:41.94971166 +0000 UTC m=+1.143793453 container remove 763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  4 06:01:41 np0005545273 systemd[1]: libpod-conmon-763a8ce75ce2b6ac80ad688c6da9f9df8f68976efd946a17443666dd2675ac68.scope: Deactivated successfully.
Dec  4 06:01:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  4 06:01:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:01:42 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  4 06:01:42 np0005545273 ceph-mon[75358]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:01:42 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:43 np0005545273 podman[274685]: 2025-12-04 11:01:43.969145946 +0000 UTC m=+0.061802469 container health_status 292c494d23f0ef95fdfee26503415172c30fc3b0e43d4ae310d27f6d00e87567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  4 06:01:44 np0005545273 podman[274684]: 2025-12-04 11:01:44.00226309 +0000 UTC m=+0.098100570 container health_status 0aab161699fc699b0b9cbdbba2f7c49d29da0eb5c18a128b108769a7f37bbe06 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  4 06:01:44 np0005545273 systemd-logind[798]: New session 55 of user zuul.
Dec  4 06:01:44 np0005545273 systemd[1]: Started Session 55 of User zuul.
Dec  4 06:01:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:01:44 np0005545273 ceph-mon[75358]: from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' 
Dec  4 06:01:44 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:44 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:46 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:46 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14844 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:47 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14846 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:48 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:48 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  4 06:01:48 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/53782528' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec  4 06:01:49 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:50 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:51 np0005545273 ovs-vsctl[275015]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  4 06:01:52 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:52 np0005545273 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  4 06:01:52 np0005545273 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  4 06:01:52 np0005545273 virtqemud[244380]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  4 06:01:53 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: cache status {prefix=cache status} (starting...)
Dec  4 06:01:53 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: client ls {prefix=client ls} (starting...)
Dec  4 06:01:53 np0005545273 lvm[275352]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  4 06:01:53 np0005545273 lvm[275351]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  4 06:01:53 np0005545273 lvm[275352]: VG ceph_vg1 finished
Dec  4 06:01:53 np0005545273 lvm[275351]: VG ceph_vg2 finished
Dec  4 06:01:53 np0005545273 lvm[275389]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  4 06:01:53 np0005545273 lvm[275389]: VG ceph_vg0 finished
Dec  4 06:01:53 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14850 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:54 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: damage ls {prefix=damage ls} (starting...)
Dec  4 06:01:54 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump loads {prefix=dump loads} (starting...)
Dec  4 06:01:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14852 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:54 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  4 06:01:54 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail
Dec  4 06:01:54 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  4 06:01:54 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  4 06:01:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec  4 06:01:54 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169780587' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec  4 06:01:54 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:54 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14856 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:54 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  4 06:01:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 11:01:54.933 156095 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 06:01:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 11:01:54.935 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 06:01:54 np0005545273 ovn_metadata_agent[156090]: 2025-12-04 11:01:54.935 156095 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 06:01:55 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  4 06:01:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  4 06:01:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130012547' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  4 06:01:55 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  4 06:01:55 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14860 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:55 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T11:01:55.303+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  4 06:01:55 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  4 06:01:55 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: ops {prefix=ops} (starting...)
Dec  4 06:01:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec  4 06:01:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450865011' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec  4 06:01:55 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  4 06:01:55 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856620364' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec  4 06:01:56 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: session ls {prefix=session ls} (starting...)
Dec  4 06:01:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  4 06:01:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948930561' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec  4 06:01:56 np0005545273 ceph-mds[96299]: mds.cephfs.compute-0.zcbnoq asok_command: status {prefix=status} (starting...)
Dec  4 06:01:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  4 06:01:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3851079445' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  4 06:01:56 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec  4 06:01:56 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14870 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:56 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  4 06:01:56 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1507816033' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  4 06:01:57 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14874 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:57 np0005545273 nova_compute[244644]: 2025-12-04 11:01:57.339 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:01:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  4 06:01:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683046066' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  4 06:01:57 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec  4 06:01:57 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497263831' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec  4 06:01:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:01:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:01:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  4 06:01:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054515866' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  4 06:01:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:01:58 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:01:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  4 06:01:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3733613048' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec  4 06:01:58 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 06:01:58 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  4 06:01:58 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120272066' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec  4 06:01:58 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14886 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:58 np0005545273 ceph-mgr[75651]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  4 06:01:58 np0005545273 ceph-f62c0b6f-1e98-5ab5-93ea-2f0cbb6d097d-mgr-compute-0-iwufnj[75647]: 2025-12-04T11:01:58.959+0000 7f8454576640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  4 06:01:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  4 06:01:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776949453' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  4 06:01:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  4 06:01:59 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506140270' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec  4 06:01:59 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:01:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] scanning for idle connections..
Dec  4 06:01:59 np0005545273 ceph-mgr[75651]: [volumes INFO mgr_util] cleaning up connections: []
Dec  4 06:01:59 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14892 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1638400 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1630208 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 1622016 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1613824 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1597440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1581056 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 1572864 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1556480 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1548288 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 1540096 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1523712 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a3a34000
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: mgrc handle_mgr_configure stats_period=5
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1302528 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1294336 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1286144 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1277952 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946151 data_alloc: 218103808 data_used: 1682
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1269760 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.998474121s of 300.141143799s, submitted: 90
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 1024000 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 958464 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 950272 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 925696 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 917504 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 901120 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 892928 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 884736 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 868352 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 860160 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 843776 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 835584 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 827392 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 802816 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 794624 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 786432 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 778240 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 770048 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 761856 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 753664 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 745472 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:01:59 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread fragmentation_score=0.000134 took=0.000054s
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 737280 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 729088 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 720896 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5703 writes, 24K keys, 5703 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5703 writes, 902 syncs, 6.32 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c0a1bdfa30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 688128 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 671744 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 663552 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 655360 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 638976 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 630784 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 622592 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.694915771s of 299.933593750s, submitted: 24
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 573440 heap: 74522624 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 221184 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 212992 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 204800 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 196608 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 188416 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 180224 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 172032 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 163840 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 947687 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec3000/0x0/0x4ffc00000, data 0xa9ca4/0x169000, compress 0x0/0x0/0x0, omap 0x117d1, meta 0x2bbe82f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 155648 heap: 75571200 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 116.999755859s of 117.139999390s, submitted: 90
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 1040384 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 118 heartbeat osd_stat(store_statfs(0x4fcebe000/0x0/0x4ffc00000, data 0xab840/0x16c000, compress 0x0/0x0/0x0, omap 0x11ab8, meta 0x2bbe548), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 991232 heap: 76619776 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 120 ms_handle_reset con 0x55c0a3fee800 session 0x55c0a401ec40
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 9330688 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983916 data_alloc: 218103808 data_used: 3520
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x51efe8/0x5e2000, compress 0x0/0x0/0x0, omap 0x11dfd, meta 0x2bbe203), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 9175040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 9134080 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 121 ms_handle_reset con 0x55c0a2e4e400 session 0x55c0a5490380
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988171 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x520bc3/0x5e6000, compress 0x0/0x0/0x0, omap 0x11e1f, meta 0x2bbe1e1), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990705 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 9420800 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.107776642s of 22.230192184s, submitted: 58
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca41000/0x0/0x4ffc00000, data 0x522642/0x5e9000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993025 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 9158656 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 10
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 9142272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca36000/0x0/0x4ffc00000, data 0x52d5d7/0x5f6000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 9199616 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca35000/0x0/0x4ffc00000, data 0x52e85e/0x5f7000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 9027584 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995567 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 8994816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 8945664 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fca29000/0x0/0x4ffc00000, data 0x53abca/0x603000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 8765440 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.814142227s of 10.116048813s, submitted: 35
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999813 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 8650752 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 11
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 8470528 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 8273920 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x558809/0x622000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 8151040 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001261 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 6946816 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 6905856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 6782976 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fc9e7000/0x0/0x4ffc00000, data 0x57b223/0x645000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79249408 unmapped: 6684672 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.159253120s of 10.109436035s, submitted: 78
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006011 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 79314944 unmapped: 6619136 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9e5000/0x0/0x4ffc00000, data 0x57def9/0x647000, compress 0x0/0x0/0x0, omap 0x11e6a, meta 0x2bbe196), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 5390336 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 5447680 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9d2000/0x0/0x4ffc00000, data 0x590256/0x65a000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008761 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 5210112 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9c8000/0x0/0x4ffc00000, data 0x59a3b3/0x664000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 5193728 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009539 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9bc000/0x0/0x4ffc00000, data 0x5a6634/0x670000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.569020271s of 12.741366386s, submitted: 41
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 5406720 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 5365760 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc9b3000/0x0/0x4ffc00000, data 0x5af14c/0x679000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006899 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80633856 unmapped: 5300224 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80642048 unmapped: 5292032 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 5251072 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011691 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 5029888 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fc996000/0x0/0x4ffc00000, data 0x5cac75/0x696000, compress 0x0/0x0/0x0, omap 0x11ebb, meta 0x2bbe145), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 5021696 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882642746s of 10.000583649s, submitted: 38
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 2809856 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1728512 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1630208 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014373 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1556480 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1417216 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fb7ca000/0x0/0x4ffc00000, data 0x5f65bb/0x6c2000, compress 0x0/0x0/0x0, omap 0x11f29, meta 0x3d5e0d7), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1245184 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 1056768 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017817 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 958464 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.733018875s of 10.001555443s, submitted: 91
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 950272 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb7a8000/0x0/0x4ffc00000, data 0x61635f/0x6e2000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 917504 heap: 85934080 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019529 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1957888 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb78d000/0x0/0x4ffc00000, data 0x633c4e/0x6ff000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1949696 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 1826816 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb77b000/0x0/0x4ffc00000, data 0x64574b/0x711000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb778000/0x0/0x4ffc00000, data 0x6480c0/0x714000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 1802240 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023333 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 1728512 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848536491s of 10.001356125s, submitted: 29
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 1703936 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb763000/0x0/0x4ffc00000, data 0x65cfdb/0x729000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 1867776 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022057 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 1810432 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 1744896 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 401408 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb72d000/0x0/0x4ffc00000, data 0x68e752/0x75f000, compress 0x0/0x0/0x0, omap 0x11faa, meta 0x3d5e056), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 999424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041309 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 12
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 983040 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 1196032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.589168549s of 10.002529144s, submitted: 57
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fb703000/0x0/0x4ffc00000, data 0x6ba903/0x789000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 1105920 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 1073152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 1015808 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038997 data_alloc: 218103808 data_used: 4260
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6da000/0x0/0x4ffc00000, data 0x6e2609/0x7b2000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86712320 unmapped: 1318912 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fb6bc000/0x0/0x4ffc00000, data 0x6ffebe/0x7d0000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 1179648 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 2170880 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88195072 unmapped: 884736 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fb669000/0x0/0x4ffc00000, data 0x74b4c0/0x81f000, compress 0x0/0x0/0x0, omap 0x12010, meta 0x3d5dff0), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059855 data_alloc: 218103808 data_used: 4260
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 679936 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89464832 unmapped: 663552 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.600092888s of 10.000102997s, submitted: 172
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 1277952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90284032 unmapped: 892928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89513984 unmapped: 1662976 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063631 data_alloc: 218103808 data_used: 4105
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 89546752 unmapped: 1630208 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fb622000/0x0/0x4ffc00000, data 0x7907b7/0x866000, compress 0x0/0x0/0x0, omap 0x12520, meta 0x3d5dae0), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90038272 unmapped: 1138688 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fb5e9000/0x0/0x4ffc00000, data 0x7ca62d/0x8a1000, compress 0x0/0x0/0x0, omap 0x12680, meta 0x3d5d980), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90152960 unmapped: 2072576 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90251264 unmapped: 1974272 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075351 data_alloc: 218103808 data_used: 4755
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90472448 unmapped: 1753088 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 90054656 unmapped: 2170880 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.681773186s of 10.032649994s, submitted: 160
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91299840 unmapped: 925696 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b5000/0x0/0x4ffc00000, data 0x7fe7a5/0x8d7000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x802d67/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 860160 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076485 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 688128 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x803215/0x8db000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075757 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x8109fa/0x8e9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91734016 unmapped: 491520 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.903874397s of 10.266777992s, submitted: 19
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 483328 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91783168 unmapped: 442368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 294912 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb580000/0x0/0x4ffc00000, data 0x833caa/0x90c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078121 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91357184 unmapped: 868352 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91504640 unmapped: 1769472 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91537408 unmapped: 1736704 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079017 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 1728512 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb556000/0x0/0x4ffc00000, data 0x85dc84/0x936000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080545 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb546000/0x0/0x4ffc00000, data 0x86d830/0x946000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91578368 unmapped: 1695744 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.004943848s of 16.979648590s, submitted: 22
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91217920 unmapped: 2056192 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb539000/0x0/0x4ffc00000, data 0x87b015/0x953000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081409 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91455488 unmapped: 1818624 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082297 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb510000/0x0/0x4ffc00000, data 0x8a3c6f/0x97c000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 91111424 unmapped: 2162688 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92340224 unmapped: 933888 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4f3000/0x0/0x4ffc00000, data 0x8c1190/0x999000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4d8000/0x0/0x4ffc00000, data 0x8db7f0/0x9b4000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084117 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.924418449s of 11.061837196s, submitted: 25
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92504064 unmapped: 770048 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92512256 unmapped: 1810432 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a5b1a400 session 0x55c0a5b048c0
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 ms_handle_reset con 0x55c0a450e000 session 0x55c0a5f96700
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085581 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee753/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93093888 unmapped: 2277376 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 13
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb4c4000/0x0/0x4ffc00000, data 0x8ee8b9/0x9c8000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb492000/0x0/0x4ffc00000, data 0x9215f8/0x9fa000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb493000/0x0/0x4ffc00000, data 0x92155d/0x9f9000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090467 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.560784340s of 10.244839668s, submitted: 209
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 2424832 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 2392064 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089339 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087675 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.419944763s of 14.577485085s, submitted: 11
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087819 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089351 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089207 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.268507957s of 10.285860062s, submitted: 5
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a701/0xa13000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088649 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb47a000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090309 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.970705986s of 10.010634422s, submitted: 8
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fb479000/0x0/0x4ffc00000, data 0x93a666/0xa12000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fb475000/0x0/0x4ffc00000, data 0x93c26b/0xa15000, compress 0x0/0x0/0x0, omap 0x129e5, meta 0x3d5d61b), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092127 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 2269184 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094885 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.480938911s of 11.537956238s, submitted: 43
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095029 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93de27/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097421 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 2220032 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.449153900s of 10.473722458s, submitted: 15
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2211840 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096815 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddf9/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098363 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93216768 unmapped: 2154496 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.951936722s of 10.007729530s, submitted: 8
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 2146304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.995441437s of 11.008138657s, submitted: 5
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb474000/0x0/0x4ffc00000, data 0x93dcea/0xa18000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097773 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93233152 unmapped: 2138112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 2367488 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097789 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.616669655s of 12.638894081s, submitted: 13
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 2359296 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 2351104 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097805 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93052928 unmapped: 2318336 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd5e/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097645 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93085696 unmapped: 2285568 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93dd1a/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2252800 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.879154205s of 10.907876015s, submitted: 15
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 2236416 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099337 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2228224 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb471000/0x0/0x4ffc00000, data 0x93dde1/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb472000/0x0/0x4ffc00000, data 0x93ddb5/0xa1a000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099177 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 2203648 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.439765930s of 12.481030464s, submitted: 20
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098603 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fb473000/0x0/0x4ffc00000, data 0x93dd85/0xa19000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 2195456 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 2187264 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103199 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46e000/0x0/0x4ffc00000, data 0x93f98a/0xa1c000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 2179072 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94248960 unmapped: 1122304 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104299 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94257152 unmapped: 1114112 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fb46f000/0x0/0x4ffc00000, data 0x93fa25/0xa1d000, compress 0x0/0x0/0x0, omap 0x12a5b, meta 0x3d5d5a5), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.015718460s of 12.077057838s, submitted: 32
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109053 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109881 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x9415da/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.393504143s of 10.408122063s, submitted: 19
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94265344 unmapped: 1105920 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb469000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112547 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb468000/0x0/0x4ffc00000, data 0x9416a3/0xa23000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94281728 unmapped: 1089536 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94289920 unmapped: 1081344 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111653 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46a000/0x0/0x4ffc00000, data 0x941608/0xa22000, compress 0x0/0x0/0x0, omap 0x12b53, meta 0x3d5d4ad), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508604050s of 11.536386490s, submitted: 13
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94298112 unmapped: 1073152 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112619 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1064960 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb46b000/0x0/0x4ffc00000, data 0x94153f/0xa21000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94322688 unmapped: 1048576 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94330880 unmapped: 1040384 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94347264 unmapped: 2072576 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121177 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 94388224 unmapped: 2031616 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb445000/0x0/0x4ffc00000, data 0x964870/0xa47000, compress 0x0/0x0/0x0, omap 0x12ca4, meta 0x3d5d35c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95461376 unmapped: 958464 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 745472 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.857433319s of 10.014651299s, submitted: 97
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95526912 unmapped: 1941504 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135499 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95584256 unmapped: 1884160 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3c4000/0x0/0x4ffc00000, data 0x9e0112/0xac6000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 1875968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x9eacc4/0xad1000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 1867776 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb3ac000/0x0/0x4ffc00000, data 0x9f885a/0xade000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 991232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134715 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 827392 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96616448 unmapped: 851968 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 96813056 unmapped: 655360 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139427 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.366982460s of 12.445398331s, submitted: 44
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97017856 unmapped: 1499136 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb366000/0x0/0x4ffc00000, data 0xa3fab4/0xb26000, compress 0x0/0x0/0x0, omap 0x12cf6, meta 0x3d5d30a), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97050624 unmapped: 1466368 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 1425408 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149237 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fb2d4000/0x0/0x4ffc00000, data 0xacf83f/0xbb8000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 1572864 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97460224 unmapped: 3153920 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fb29d000/0x0/0x4ffc00000, data 0xb04a92/0xbed000, compress 0x0/0x0/0x0, omap 0x12d7b, meta 0x3d5d285), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97435648 unmapped: 3178496 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157693 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 2998272 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.291193008s of 10.482179642s, submitted: 116
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 2637824 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fb217000/0x0/0x4ffc00000, data 0xb8911b/0xc73000, compress 0x0/0x0/0x0, omap 0x12dfc, meta 0x3d5d204), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98099200 unmapped: 2514944 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 3121152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1851392 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174773 data_alloc: 218103808 data_used: 5091
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1802240 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb19c000/0x0/0x4ffc00000, data 0xc00bf3/0xcee000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99565568 unmapped: 2097152 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 99672064 unmapped: 1990656 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb16d000/0x0/0x4ffc00000, data 0xc32018/0xd1d000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1843200 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101351424 unmapped: 2408448 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb147000/0x0/0x4ffc00000, data 0xc594a7/0xd45000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177381 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 101490688 unmapped: 2269184 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.723609924s of 10.026507378s, submitted: 124
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102588416 unmapped: 1171456 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fb132000/0x0/0x4ffc00000, data 0xc6f06d/0xd59000, compress 0x0/0x0/0x0, omap 0x12ee7, meta 0x3d5d119), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 2064384 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188495 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102916096 unmapped: 1892352 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xd007a6/0xdec000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192129 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb067000/0x0/0x4ffc00000, data 0xd37a62/0xe24000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 1785856 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103170048 unmapped: 1638400 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.634295464s of 11.785771370s, submitted: 94
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 2170880 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190545 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fb04c000/0x0/0x4ffc00000, data 0xd54161/0xe40000, compress 0x0/0x0/0x0, omap 0x18254, meta 0x3d57dac), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 151 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190921 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.272990227s of 10.301798820s, submitted: 29
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192613 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194161 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 2162688 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c70/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195709 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.117662430s of 13.127370834s, submitted: 5
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55e6c/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199109 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd55e6e/0xe47000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 2154496 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198087 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb045000/0x0/0x4ffc00000, data 0xd55dd4/0xe46000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.867134094s of 10.890979767s, submitted: 14
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 2146304 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55d37/0xe45000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197035 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 8989 writes, 34K keys, 8989 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 8989 writes, 2320 syncs, 3.87 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3286 writes, 10K keys, 3286 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s
Interval WAL: 3286 writes, 1418 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102563840 unmapped: 2244608 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197419 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c00/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 102572032 unmapped: 2236416 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.055023193s of 11.093473434s, submitted: 18
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 1187840 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196669 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.888220787s of 14.927642822s, submitted: 4
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 1179648 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 1171456 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 14
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103702528 unmapped: 1105920 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55c4c/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001877785s of 10.011025429s, submitted: 5
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197819 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197835 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.001707077s of 10.005904198s, submitted: 3
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.169968605s of 10.174468994s, submitted: 2
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103710720 unmapped: 1097728 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196813 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196685 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103718912 unmapped: 1089536 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb04a000/0x0/0x4ffc00000, data 0xd55b3a/0xe42000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196829 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.824216843s of 17.850557327s, submitted: 6
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 1081344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198361 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 1540096 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 1531904 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103325696 unmapped: 1482752 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199031 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.812626839s of 10.004839897s, submitted: 95
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 2637824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb049000/0x0/0x4ffc00000, data 0xd55bd5/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198585 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c02/0xe43000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103227392 unmapped: 2629632 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fb047000/0x0/0x4ffc00000, data 0xd55c9d/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103235584 unmapped: 2621440 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201969 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.108821869s of 10.326163292s, submitted: 42
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103243776 unmapped: 2613248 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb048000/0x0/0x4ffc00000, data 0xd55c9b/0xe44000, compress 0x0/0x0/0x0, omap 0x185e1, meta 0x3d57a1f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd578a0/0xe47000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205847 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd57808/0xe46000, compress 0x0/0x0/0x0, omap 0x1885f, meta 0x3d577a1), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103251968 unmapped: 2605056 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208173 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb041000/0x0/0x4ffc00000, data 0xd59285/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.619541168s of 10.691827774s, submitted: 44
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103268352 unmapped: 2588672 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207727 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb044000/0x0/0x4ffc00000, data 0xd591be/0xe48000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208843 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884953499s of 11.015766144s, submitted: 6
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103276544 unmapped: 2580480 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 ms_handle_reset con 0x55c0a3fef800 session 0x55c0a3818380
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103481344 unmapped: 2375680 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 15
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208555 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103563264 unmapped: 2293760 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb043000/0x0/0x4ffc00000, data 0xd59259/0xe49000, compress 0x0/0x0/0x0, omap 0x18b4c, meta 0x3d574b4), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103571456 unmapped: 2285568 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5ae5e/0xe4c000, compress 0x0/0x0/0x0, omap 0x18dca, meta 0x3d57236), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215781 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03b000/0x0/0x4ffc00000, data 0xd5c8dd/0xe4f000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.581476212s of 12.955293655s, submitted: 224
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216753 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214487 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 2277376 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fb03e000/0x0/0x4ffc00000, data 0xd5c842/0xe4e000, compress 0x0/0x0/0x0, omap 0x190e0, meta 0x3d56f20), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 156 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217965 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.289826393s of 10.338050842s, submitted: 31
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fb039000/0x0/0x4ffc00000, data 0xd5e447/0xe51000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103596032 unmapped: 2260992 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222287 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb03a000/0x0/0x4ffc00000, data 0xd5e4e2/0xe52000, compress 0x0/0x0/0x0, omap 0x1935e, meta 0x3d56ca2), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd5ff61/0xe55000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223979 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd5fffc/0xe56000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.204831123s of 11.530242920s, submitted: 36
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 158 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd60097/0xe57000, compress 0x0/0x0/0x0, omap 0x19674, meta 0x3d5698c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225925 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103604224 unmapped: 2252800 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb034000/0x0/0x4ffc00000, data 0xd61b66/0xe58000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd61acb/0xe57000, compress 0x0/0x0/0x0, omap 0x198f2, meta 0x3d5670e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226641 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.297651291s of 13.383323669s, submitted: 59
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103620608 unmapped: 2236416 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103628800 unmapped: 2228224 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229975 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.247886658s of 39.155723572s, submitted: 13
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1230119 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb030000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229415 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103636992 unmapped: 2220032 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fb032000/0x0/0x4ffc00000, data 0xd6354a/0xe5a000, compress 0x0/0x0/0x0, omap 0x19c6f, meta 0x3d56391), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.258265495s of 12.265155792s, submitted: 3
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02d000/0x0/0x4ffc00000, data 0xd6514f/0xe5d000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234729 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb02e000/0x0/0x4ffc00000, data 0xd651ea/0xe5e000, compress 0x0/0x0/0x0, omap 0x19eee, meta 0x3d56112), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236625 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02a000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 2211840 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.053594589s of 16.302835464s, submitted: 44
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103653376 unmapped: 2203648 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 16
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb029000/0x0/0x4ffc00000, data 0xd66ee7/0xe63000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103784448 unmapped: 2072576 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240981 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 17
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103792640 unmapped: 2064384 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238795 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 61.749771118s of 62.368705750s, submitted: 11
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238939 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 2048000 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a450f000 session 0x55c0a3694a80
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 ms_handle_reset con 0x55c0a3655400 session 0x55c0a6381500
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104071168 unmapped: 1785856 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 18
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238635 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fb02c000/0x0/0x4ffc00000, data 0xd66bce/0xe60000, compress 0x0/0x0/0x0, omap 0x1a203, meta 0x3d55dfd), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238779 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.142169952s of 11.714550018s, submitted: 184
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242433 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb027000/0x0/0x4ffc00000, data 0xd687d3/0xe63000, compress 0x0/0x0/0x0, omap 0x1a482, meta 0x3d55b7e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244903 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.232194901s of 18.285558701s, submitted: 52
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245047 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb024000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244343 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.992785454s of 15.000616074s, submitted: 4
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244487 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104079360 unmapped: 1777664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244471 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb026000/0x0/0x4ffc00000, data 0xd6a252/0xe66000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fb00b000/0x0/0x4ffc00000, data 0xd84fc5/0xe81000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104013824 unmapped: 1843200 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe9000/0x0/0x4ffc00000, data 0xda606f/0xea3000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.500107765s of 10.000913620s, submitted: 13
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 1679360 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252421 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104243200 unmapped: 1613824 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fafe2000/0x0/0x4ffc00000, data 0xdacba7/0xeaa000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 1417216 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104448000 unmapped: 1409024 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259093 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 1335296 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 104873984 unmapped: 2031616 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.939127922s of 10.002140999s, submitted: 20
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105013248 unmapped: 1892352 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faf67000/0x0/0x4ffc00000, data 0xe28f92/0xf25000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255505 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 1859584 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105111552 unmapped: 1794048 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 2793472 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260241 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 heartbeat osd_stat(store_statfs(0x4faeee000/0x0/0x4ffc00000, data 0xea172e/0xf9e000, compress 0x0/0x0/0x0, omap 0x1a762, meta 0x3d5589e), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105422848 unmapped: 2531328 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 2310144 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.055247307s of 10.003301620s, submitted: 24
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105062400 unmapped: 2891776 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264103 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 165 heartbeat osd_stat(store_statfs(0x4faea9000/0x0/0x4ffc00000, data 0xee3272/0xfe1000, compress 0x0/0x0/0x0, omap 0x1a9e1, meta 0x3d5561f), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 2826240 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 2727936 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 2695168 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 2596864 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae8f000/0x0/0x4ffc00000, data 0xefc554/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config show' '{prefix=config show}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 2236416 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105676800 unmapped: 3325952 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 3301376 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267181 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 14344192 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'perf dump' '{prefix=perf dump}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'perf schema' '{prefix=perf schema}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.438171387s of 77.488555908s, submitted: 40
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 14098432 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc66e/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 14098432 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 19
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 ms_handle_reset con 0x55c0a2e4e400 session 0x55c0a5f06700
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 13877248 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 13877248 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 13877248 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Got map version 20
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106184704 unmapped: 13860864 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106192896 unmapped: 13852672 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106201088 unmapped: 13844480 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 13836288 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106217472 unmapped: 13828096 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 13819904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2835 syncs, 3.75 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1646 writes, 3739 keys, 1646 commit groups, 1.0 writes per commit group, ingest: 2.53 MB, 0.00 MB/s
Interval WAL: 1646 writes, 515 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 13811712 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 13803520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106250240 unmapped: 13795328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 13787136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 13787136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 13787136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265597 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 235.305908203s of 235.333023071s, submitted: 162
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106274816 unmapped: 13770752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106266624 unmapped: 13778944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 13746176 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107364352 unmapped: 12681216 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107380736 unmapped: 12664832 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107421696 unmapped: 12623872 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107438080 unmapped: 12607488 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 12779520 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 12771328 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 12763136 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  4 06:02:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010535752' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 12754944 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 12746752 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 12738560 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc ms_handle_reset ms_handle_reset con 0x55c0a5b1a800
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: mgrc handle_mgr_configure stats_period=5
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107429888 unmapped: 12615680 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107413504 unmapped: 12632064 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 298.117980957s of 300.469848633s, submitted: 90
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 12599296 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 12599296 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107446272 unmapped: 12599296 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config show' '{prefix=config show}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107405312 unmapped: 12640256 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}'
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 12566528 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fae91000/0x0/0x4ffc00000, data 0xefc767/0xffb000, compress 0x0/0x0/0x0, omap 0x1acf4, meta 0x3d5530c), peers [0,1] op hist [])
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265261 data_alloc: 218103808 data_used: 5363
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 12795904 heap: 120045568 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:00 np0005545273 ceph-osd[88205]: do_command 'log dump' '{prefix=log dump}'
Dec  4 06:02:00 np0005545273 nova_compute[244644]: 2025-12-04 11:02:00.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:02:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14896 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:02:00 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 06:02:00 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  4 06:02:00 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/678272308' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  4 06:02:00 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14900 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec  4 06:02:01 np0005545273 nova_compute[244644]: 2025-12-04 11:02:01.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:02:01 np0005545273 nova_compute[244644]: 2025-12-04 11:02:01.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  4 06:02:01 np0005545273 nova_compute[244644]: 2025-12-04 11:02:01.339 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  4 06:02:01 np0005545273 nova_compute[244644]: 2025-12-04 11:02:01.356 244650 DEBUG nova.compute.manager [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989233441' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  4 06:02:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14904 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} v 0)
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1475531463' entity='mgr.compute-0.iwufnj' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.jnsliu", "name": "rgw_frontends"} : dispatch
Dec  4 06:02:01 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14908 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  4 06:02:01 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075459707' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  4 06:02:02 np0005545273 nova_compute[244644]: 2025-12-04 11:02:02.338 244650 DEBUG oslo_service.periodic_task [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  4 06:02:02 np0005545273 nova_compute[244644]: 2025-12-04 11:02:02.379 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 06:02:02 np0005545273 nova_compute[244644]: 2025-12-04 11:02:02.380 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 06:02:02 np0005545273 nova_compute[244644]: 2025-12-04 11:02:02.380 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 06:02:02 np0005545273 nova_compute[244644]: 2025-12-04 11:02:02.385 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  4 06:02:02 np0005545273 nova_compute[244644]: 2025-12-04 11:02:02.386 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 06:02:02 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 06:02:02 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14912 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:02:02 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  4 06:02:02 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318634962' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  4 06:02:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 06:02:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726729182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.036 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.210 244650 WARNING nova.virt.libvirt.driver [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.211 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4802MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.211 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.212 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.333 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.334 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  4 06:02:03 np0005545273 nova_compute[244644]: 2025-12-04 11:02:03.389 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  4 06:02:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14916 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  4 06:02:03 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  4 06:02:03 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185841392' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  4 06:02:03 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14922 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  4 06:02:03 np0005545273 podman[276746]: 2025-12-04 11:02:03.971683845 +0000 UTC m=+0.073069825 container health_status fe10987cdf96bb2ef3a634814bc5ff5bb5f3730297d2cec3ce9168b181ccb2f4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 06:02:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  4 06:02:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475921126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  4 06:02:04 np0005545273 nova_compute[244644]: 2025-12-04 11:02:04.034 244650 DEBUG oslo_concurrency.processutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  4 06:02:04 np0005545273 nova_compute[244644]: 2025-12-04 11:02:04.039 244650 DEBUG nova.compute.provider_tree [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed in ProviderTree for provider: 39e18386-dcd4-4a7a-8441-091a9ba1f70f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  4 06:02:04 np0005545273 nova_compute[244644]: 2025-12-04 11:02:04.160 244650 DEBUG nova.scheduler.client.report [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Inventory has not changed for provider 39e18386-dcd4-4a7a-8441-091a9ba1f70f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  4 06:02:04 np0005545273 nova_compute[244644]: 2025-12-04 11:02:04.178 244650 DEBUG nova.compute.resource_tracker [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  4 06:02:04 np0005545273 nova_compute[244644]: 2025-12-04 11:02:04.178 244650 DEBUG oslo_concurrency.lockutils [None req-af4ddee2-19a9-435c-b0ec-1d20a969ec22 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  4 06:02:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  4 06:02:04 np0005545273 ceph-mon[75358]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2293761273' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec  4 06:02:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14926 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  4 06:02:04 np0005545273 ceph-mgr[75651]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 79 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec  4 06:02:04 np0005545273 ceph-mon[75358]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec  4 06:02:04 np0005545273 ceph-mgr[75651]: log_channel(audit) log [DBG] : from='client.14930 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 827392 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590067fb800 session 0x559004f09340
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 ms_handle_reset con 0x5590071f1800 session 0x5590071bafc0
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 647168 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 819200 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.065643311s of 300.198425293s, submitted: 90
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 892928 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 884736 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 876544 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 868352 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 860160 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000141 took=0.000037s
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 851968 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1201.0 total, 600.0 interval#012Cumulative writes: 7142 writes, 28K keys, 7142 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7142 writes, 1395 syncs, 5.12 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.18              0.00         1    0.181       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559004ea78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 811008 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 802816 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.836059570s of 299.876525879s, submitted: 22
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 786432 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:04 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fce58000/0x0/0x4ffc00000, data 0x119030/0x1d4000, compress 0x0/0x0/0x0, omap 0xff19, meta 0x2bc00e7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996597 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 794624 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.996520996s of 117.133117676s, submitted: 90
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 753664 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fce52000/0x0/0x4ffc00000, data 0x11abd4/0x1d8000, compress 0x0/0x0/0x0, omap 0x101ec, meta 0x2bbfe14), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 745472 heap: 82829312 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 17481728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 120 ms_handle_reset con 0x559008cfc000 session 0x559009955340
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 90808320 unmapped: 8806400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136303 data_alloc: 218103808 data_used: 5976
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 17096704 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 121 ms_handle_reset con 0x559008d7f400 session 0x559008e3d880
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb64b000/0x0/0x4ffc00000, data 0x191e3ca/0x19e1000, compress 0x0/0x0/0x0, omap 0x106c6, meta 0x2bbf93a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141665 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fb645000/0x0/0x4ffc00000, data 0x191ffa5/0x19e5000, compress 0x0/0x0/0x0, omap 0x1099d, meta 0x2bbf663), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 17080320 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144007 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb642000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x10c59, meta 0x2bbf3a7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.466564178s of 22.644350052s, submitted: 41
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 17072128 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 10
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143287 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb644000/0x0/0x4ffc00000, data 0x1921a24/0x19e8000, compress 0x0/0x0/0x0, omap 0x111b8, meta 0x2bbee48), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 16195584 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 16064512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144979 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x11671, meta 0x2bbe98f), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb643000/0x0/0x4ffc00000, data 0x1921abf/0x19e9000, compress 0x0/0x0/0x0, omap 0x116d5, meta 0x2bbe92b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.574191093s of 10.002868652s, submitted: 9
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 15015936 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 11
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149895 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fb641000/0x0/0x4ffc00000, data 0x1921c53/0x19ea000, compress 0x0/0x0/0x0, omap 0x11be1, meta 0x2bbe41f), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 122 handle_osd_map epochs [122,123], i have 123, src has [1,123]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 15007744 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63b000/0x0/0x4ffc00000, data 0x19239f3/0x19ef000, compress 0x0/0x0/0x0, omap 0x1244f, meta 0x2bbdbb1), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153339 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 14999552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fb63d000/0x0/0x4ffc00000, data 0x1923a58/0x19ef000, compress 0x0/0x0/0x0, omap 0x125d3, meta 0x2bbda2d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.307727814s of 10.003003120s, submitted: 55
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157951 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19255a1/0x19f2000, compress 0x0/0x0/0x0, omap 0x13360, meta 0x2bbcca0), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156927 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x192566b/0x19f2000, compress 0x0/0x0/0x0, omap 0x13585, meta 0x2bbca7b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 14974976 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 14950400 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13a6f, meta 0x2bbc591), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158173 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 14925824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.946354866s of 13.003514290s, submitted: 32
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13b4d, meta 0x2bbc4b3), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 14917632 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157455 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb63a000/0x0/0x4ffc00000, data 0x1925735/0x19f2000, compress 0x0/0x0/0x0, omap 0x13d95, meta 0x2bbc26b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 14876672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x192589a/0x19f3000, compress 0x0/0x0/0x0, omap 0x13f02, meta 0x2bbc0fe), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 14991360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158813 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 14983168 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.955944061s of 10.002535820s, submitted: 20
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb63c000/0x0/0x4ffc00000, data 0x192585d/0x19f0000, compress 0x0/0x0/0x0, omap 0x14225, meta 0x2bbbddb), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163967 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 14966784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb638000/0x0/0x4ffc00000, data 0x19275c7/0x19f4000, compress 0x0/0x0/0x0, omap 0x149f5, meta 0x2bbb60b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 125 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb639000/0x0/0x4ffc00000, data 0x19275f6/0x19f3000, compress 0x0/0x0/0x0, omap 0x14b15, meta 0x2bbb4eb), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165991 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb634000/0x0/0x4ffc00000, data 0x19290da/0x19f6000, compress 0x0/0x0/0x0, omap 0x14d43, meta 0x2bbb2bd), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.678595543s of 10.002803802s, submitted: 82
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 14934016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x1422b, meta 0x2bbbdd5), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166245 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb636000/0x0/0x4ffc00000, data 0x1929209/0x19f6000, compress 0x0/0x0/0x0, omap 0x14347, meta 0x2bbbcb9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165655 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x1457f, meta 0x2bbba81), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.570578575s of 13.004203796s, submitted: 16
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165495 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb637000/0x0/0x4ffc00000, data 0x1929238/0x19f5000, compress 0x0/0x0/0x0, omap 0x14627, meta 0x2bbb9d9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 14909440 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 14770176 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19293e5/0x19f7000, compress 0x0/0x0/0x0, omap 0x14747, meta 0x2bbb8b9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 12
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 14696448 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170315 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 14688256 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb635000/0x0/0x4ffc00000, data 0x19296ba/0x19f7000, compress 0x0/0x0/0x0, omap 0x14867, meta 0x2bbb799), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb631000/0x0/0x4ffc00000, data 0x192b35a/0x19fb000, compress 0x0/0x0/0x0, omap 0x14ad6, meta 0x2bbb52a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 14655488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887221336s of 10.005003929s, submitted: 46
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173441 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 14647296 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 14622720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fb62d000/0x0/0x4ffc00000, data 0x192d066/0x19fd000, compress 0x0/0x0/0x0, omap 0x14fe9, meta 0x2bbb017), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 14540800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183767 data_alloc: 218103808 data_used: 6561
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 14516224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fb624000/0x0/0x4ffc00000, data 0x1930971/0x1a02000, compress 0x0/0x0/0x0, omap 0x15745, meta 0x2bba8bb), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 12394496 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 12361728 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 12345344 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb61f000/0x0/0x4ffc00000, data 0x19360be/0x1a0b000, compress 0x0/0x0/0x0, omap 0x1637b, meta 0x2bb9c85), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.471419334s of 10.002448082s, submitted: 188
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194735 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 12328960 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 12312576 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 12263424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x1939a3d/0x1a12000, compress 0x0/0x0/0x0, omap 0x16e2c, meta 0x2bb91d4), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199515 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.264224052s of 10.198055267s, submitted: 77
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198365 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b56b/0x1a14000, compress 0x0/0x0/0x0, omap 0x1841a, meta 0x2bb7be6), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199897 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b606/0x1a15000, compress 0x0/0x0/0x0, omap 0x18532, meta 0x2bb7ace), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.554255486s of 10.002036095s, submitted: 6
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199753 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b66b/0x1a15000, compress 0x0/0x0/0x0, omap 0x189ae, meta 0x2bb7652), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 11214848 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88408064 unmapped: 11206656 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1887a, meta 0x2bb7786), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.060717583s of 14.003334045s, submitted: 8
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b635/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ace7, meta 0x2bb5319), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b6ff/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.796292305s of 10.001618385s, submitted: 11
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199323 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 11198464 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 ms_handle_reset con 0x559008d81400 session 0x55900771efc0
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 13
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199163 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 10788864 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193b7c9/0x1a14000, compress 0x0/0x0/0x0, omap 0x1ad2e, meta 0x2bb52d2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10780672 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.715125084s of 10.001788139s, submitted: 197
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b864/0x1a15000, compress 0x0/0x0/0x0, omap 0x1b9f0, meta 0x2bb4610), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200871 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 10772480 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 10764288 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 10747904 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb617000/0x0/0x4ffc00000, data 0x193b993/0x1a15000, compress 0x0/0x0/0x0, omap 0x1c126, meta 0x2bb3eda), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 10731520 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200281 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c596, meta 0x2bb3a6a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200297 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 10715136 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.359195709s of 13.108038902s, submitted: 15
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c66b, meta 0x2bb3995), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb618000/0x0/0x4ffc00000, data 0x193ba27/0x1a14000, compress 0x0/0x0/0x0, omap 0x1c66b, meta 0x2bb3995), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200121 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 10706944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 10665984 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 10665984 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bc27/0x1a16000, compress 0x0/0x0/0x0, omap 0x1cde8, meta 0x2bb3218), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205053 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.005226135s of 10.002261162s, submitted: 14
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 10641408 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bdbb/0x1a16000, compress 0x0/0x0/0x0, omap 0x1d70f, meta 0x2bb28f1), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205867 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb616000/0x0/0x4ffc00000, data 0x193bdbb/0x1a16000, compress 0x0/0x0/0x0, omap 0x1d70f, meta 0x2bb28f1), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 10616832 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10559488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89055232 unmapped: 10559488 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206603 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb615000/0x0/0x4ffc00000, data 0x193bf4f/0x1a17000, compress 0x0/0x0/0x0, omap 0x1de8c, meta 0x2bb2174), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.561585426s of 10.002349854s, submitted: 22
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb615000/0x0/0x4ffc00000, data 0x193bf4f/0x1a17000, compress 0x0/0x0/0x0, omap 0x1e199, meta 0x2bb1e67), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 10534912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb614000/0x0/0x4ffc00000, data 0x193c0b4/0x1a18000, compress 0x0/0x0/0x0, omap 0x1e227, meta 0x2bb1dd9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206715 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 10526720 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb613000/0x0/0x4ffc00000, data 0x193dd17/0x1a19000, compress 0x0/0x0/0x0, omap 0x1eb87, meta 0x2bb1479), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb613000/0x0/0x4ffc00000, data 0x193dd17/0x1a19000, compress 0x0/0x0/0x0, omap 0x1eb87, meta 0x2bb1479), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212201 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f7c5/0x1a1b000, compress 0x0/0x0/0x0, omap 0x1f058, meta 0x2bb0fa8), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.303206444s of 10.570754051s, submitted: 62
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f7c5/0x1a1b000, compress 0x0/0x0/0x0, omap 0x1f058, meta 0x2bb0fa8), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193f860/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1f202, meta 0x2bb0dfe), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213893 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 10510336 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f4c8, meta 0x2bb0b38), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214721 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f4c8, meta 0x2bb0b38), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 10485760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.350849152s of 12.433691978s, submitted: 6
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193f8fb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x1f863, meta 0x2bb079d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214737 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1fbb7, meta 0x2bb0449), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214163 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1fcd3, meta 0x2bb032d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.983986855s of 10.001843452s, submitted: 9
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213987 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 10477568 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193f98f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x1ffe0, meta 0x2bb0020), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 10452992 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193fabe/0x1a1c000, compress 0x0/0x0/0x0, omap 0x200fc, meta 0x2baff04), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213987 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 10469376 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 10444800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fb32/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20450, meta 0x2bafbb0), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.504415512s of 10.521648407s, submitted: 9
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215695 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89178112 unmapped: 10436608 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 10428416 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193faee/0x1a1d000, compress 0x0/0x0/0x0, omap 0x2075d, meta 0x2baf8a3), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89186304 unmapped: 10428416 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fcb6/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20995, meta 0x2baf66b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fd51/0x1a1e000, compress 0x0/0x0/0x0, omap 0x20af8, meta 0x2baf508), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218185 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60e000/0x0/0x4ffc00000, data 0x193fd51/0x1a1e000, compress 0x0/0x0/0x0, omap 0x20af8, meta 0x2baf508), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89194496 unmapped: 10420224 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.069396019s of 10.170839310s, submitted: 20
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217611 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fd52/0x1a1d000, compress 0x0/0x0/0x0, omap 0x20eda, meta 0x2baf126), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fdb7/0x1a1d000, compress 0x0/0x0/0x0, omap 0x211e7, meta 0x2baee19), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218729 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89202688 unmapped: 10412032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89210880 unmapped: 10403840 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89210880 unmapped: 10403840 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb60f000/0x0/0x4ffc00000, data 0x193fe1c/0x1a1d000, compress 0x0/0x0/0x0, omap 0x21610, meta 0x2bae9f0), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb610000/0x0/0x4ffc00000, data 0x193fe4b/0x1a1c000, compress 0x0/0x0/0x0, omap 0x217ba, meta 0x2bae846), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217995 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89235456 unmapped: 10379264 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.992568970s of 12.027859688s, submitted: 18
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89571328 unmapped: 10043392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb5f7000/0x0/0x4ffc00000, data 0x19586a5/0x1a35000, compress 0x0/0x0/0x0, omap 0x21801, meta 0x2bae7ff), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223627 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 9969664 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 7282688 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa42a000/0x0/0x4ffc00000, data 0x1985ab0/0x1a62000, compress 0x0/0x0/0x0, omap 0x21964, meta 0x3d4e69c), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 7282688 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92471296 unmapped: 7143424 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92528640 unmapped: 7086080 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234575 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92815360 unmapped: 6799360 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92766208 unmapped: 6848512 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3e9000/0x0/0x4ffc00000, data 0x19c5e65/0x1aa3000, compress 0x0/0x0/0x0, omap 0x22199, meta 0x3d4de67), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3e1000/0x0/0x4ffc00000, data 0x19cdc69/0x1aab000, compress 0x0/0x0/0x0, omap 0x22271, meta 0x3d4dd8f), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.162047386s of 10.287876129s, submitted: 58
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92839936 unmapped: 6774784 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 6479872 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93224960 unmapped: 6389760 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3c0000/0x0/0x4ffc00000, data 0x19ed4c4/0x1acc000, compress 0x0/0x0/0x0, omap 0x22541, meta 0x3d4dabf), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228761 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 6709248 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa3c0000/0x0/0x4ffc00000, data 0x19ed4c4/0x1acc000, compress 0x0/0x0/0x0, omap 0x22739, meta 0x3d4d8c7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 6610944 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 6316032 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93339648 unmapped: 6275072 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93347840 unmapped: 6266880 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232545 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 93585408 unmapped: 6029312 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fa387000/0x0/0x4ffc00000, data 0x1a2844f/0x1b05000, compress 0x0/0x0/0x0, omap 0x22a96, meta 0x3d4d56a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94855168 unmapped: 4759552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.685272217s of 10.002084732s, submitted: 83
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94691328 unmapped: 4923392 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 5120000 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94666752 unmapped: 4947968 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245549 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 4694016 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 4816896 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa305000/0x0/0x4ffc00000, data 0x1aa81ef/0x1b87000, compress 0x0/0x0/0x0, omap 0x235e8, meta 0x3d4ca18), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94797824 unmapped: 4816896 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 4685824 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa2fb000/0x0/0x4ffc00000, data 0x1ab291d/0x1b91000, compress 0x0/0x0/0x0, omap 0x238ae, meta 0x3d4c752), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 139 handle_osd_map epochs [139,140], i have 140, src has [1,140]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 95059968 unmapped: 4554752 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253243 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 95166464 unmapped: 4448256 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa280000/0x0/0x4ffc00000, data 0x1b28cb3/0x1c0a000, compress 0x0/0x0/0x0, omap 0x2443b, meta 0x3d4bbc5), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 3039232 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.631328583s of 10.002451897s, submitted: 112
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96026624 unmapped: 3588096 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96157696 unmapped: 3457024 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96247808 unmapped: 3366912 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa24d000/0x0/0x4ffc00000, data 0x1b5d7a7/0x1c3e000, compress 0x0/0x0/0x0, omap 0x249db, meta 0x3d4b625), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254393 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 3162112 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 3063808 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 3063808 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 3276800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 96337920 unmapped: 3276800 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa1ec000/0x0/0x4ffc00000, data 0x1bbf955/0x1ca0000, compress 0x0/0x0/0x0, omap 0x25173, meta 0x3d4ae8d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264037 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 2056192 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97779712 unmapped: 1835008 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.661271095s of 10.002766609s, submitted: 80
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 1777664 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 1687552 heap: 99614720 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98123776 unmapped: 2539520 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268593 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa14e000/0x0/0x4ffc00000, data 0x1c5b085/0x1d3d000, compress 0x0/0x0/0x0, omap 0x25833, meta 0x3d4a7cd), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98238464 unmapped: 2424832 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98287616 unmapped: 2375680 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 2170880 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 2170880 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 1843200 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa137000/0x0/0x4ffc00000, data 0x1c75450/0x1d55000, compress 0x0/0x0/0x0, omap 0x25a20, meta 0x3d4a5e0), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa10b000/0x0/0x4ffc00000, data 0x1ca12c8/0x1d81000, compress 0x0/0x0/0x0, omap 0x25f3b, meta 0x3d4a0c5), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267149 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 589824 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 589824 heap: 100663296 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.730478287s of 10.002023697s, submitted: 71
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100179968 unmapped: 1531904 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x1cf9eea/0x1ddb000, compress 0x0/0x0/0x0, omap 0x261c3, meta 0x3d49e3d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1712128 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x1cf9eea/0x1ddb000, compress 0x0/0x0/0x0, omap 0x262e3, meta 0x3d49d1d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1712128 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277587 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1515520 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1884160 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 2023424 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa065000/0x0/0x4ffc00000, data 0x1d45c1e/0x1e27000, compress 0x0/0x0/0x0, omap 0x26ca5, meta 0x3d4935b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283849 data_alloc: 218103808 data_used: 7211
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 2023424 heap: 101711872 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 3072000 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.822227478s of 10.002123833s, submitted: 99
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fa061000/0x0/0x4ffc00000, data 0x1d479b6/0x1e2b000, compress 0x0/0x0/0x0, omap 0x27427, meta 0x3d48bd9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa061000/0x0/0x4ffc00000, data 0x1d47ae5/0x1e2b000, compress 0x0/0x0/0x0, omap 0x274b7, meta 0x3d48b49), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287513 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa05c000/0x0/0x4ffc00000, data 0x1d49580/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27c2c, meta 0x3d483d4), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1d4964a/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27e6c, meta 0x3d48194), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287719 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99696640 unmapped: 3063808 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.905422211s of 11.945888519s, submitted: 27
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1d4964a/0x1e2e000, compress 0x0/0x0/0x0, omap 0x27c0c, meta 0x3d483f4), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99704832 unmapped: 3055616 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294435 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 3047424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 3047424 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x1d4cd05/0x1e34000, compress 0x0/0x0/0x0, omap 0x285b1, meta 0x3d47a4f), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293795 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 3039232 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99729408 unmapped: 3031040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fa054000/0x0/0x4ffc00000, data 0x1d4ea03/0x1e36000, compress 0x0/0x0/0x0, omap 0x28e3e, meta 0x3d471c2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99729408 unmapped: 3031040 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.821330070s of 10.003786087s, submitted: 83
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299313 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa052000/0x0/0x4ffc00000, data 0x1d504cd/0x1e38000, compress 0x0/0x0/0x0, omap 0x2966d, meta 0x3d46993), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99745792 unmapped: 3014656 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99762176 unmapped: 2998272 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306283 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 2981888 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fa04e000/0x0/0x4ffc00000, data 0x1d53e14/0x1e3e000, compress 0x0/0x0/0x0, omap 0x2a2ce, meta 0x3d45d32), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 2973696 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.688855171s of 10.056839943s, submitted: 73
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 2965504 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306057 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 2932736 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 2924544 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fa04a000/0x0/0x4ffc00000, data 0x1d55ba7/0x1e40000, compress 0x0/0x0/0x0, omap 0x2ab5c, meta 0x3d454a4), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1892352 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1311575 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100868096 unmapped: 1892352 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa049000/0x0/0x4ffc00000, data 0x1d576b7/0x1e43000, compress 0x0/0x0/0x0, omap 0x2b14e, meta 0x3d44eb2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310999 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.768828392s of 13.002545357s, submitted: 53
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 1884160 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa049000/0x0/0x4ffc00000, data 0x1d576b7/0x1e43000, compress 0x0/0x0/0x0, omap 0x2b195, meta 0x3d44e6b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315337 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 1875968 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa044000/0x0/0x4ffc00000, data 0x1d59284/0x1e47000, compress 0x0/0x0/0x0, omap 0x2b785, meta 0x3d4487b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 1859584 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315321 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100900864 unmapped: 1859584 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.965646744s of 10.002288818s, submitted: 29
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa044000/0x0/0x4ffc00000, data 0x1d592e8/0x1e47000, compress 0x0/0x0/0x0, omap 0x2bcca, meta 0x3d44336), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316439 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 1851392 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592e6/0x1e47000, compress 0x0/0x0/0x0, omap 0x2bfd7, meta 0x3d44029), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315689 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa046000/0x0/0x4ffc00000, data 0x1d59285/0x1e46000, compress 0x0/0x0/0x0, omap 0x2c447, meta 0x3d43bb9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.925606728s of 11.002860069s, submitted: 14
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa046000/0x0/0x4ffc00000, data 0x1d59285/0x1e46000, compress 0x0/0x0/0x0, omap 0x2c447, meta 0x3d43bb9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315545 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100933632 unmapped: 1826816 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592ea/0x1e47000, compress 0x0/0x0/0x0, omap 0x2ca1a, meta 0x3d435e6), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315929 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 1818624 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d592ea/0x1e47000, compress 0x0/0x0/0x0, omap 0x2cce0, meta 0x3d43320), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.980464935s of 10.001944542s, submitted: 11
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.0 total, 600.0 interval#012Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2807 syncs, 3.71 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3259 writes, 9856 keys, 3259 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s#012Interval WAL: 3259 writes, 1412 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315801 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100950016 unmapped: 1810432 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d594e3/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d07b, meta 0x3d42f85), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d594e3/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d07b, meta 0x3d42f85), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 1802240 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 1794048 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315801 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.627371788s of 10.002052307s, submitted: 15
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa045000/0x0/0x4ffc00000, data 0x1d59677/0x1e47000, compress 0x0/0x0/0x0, omap 0x2d5c0, meta 0x3d42a40), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315817 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 1785856 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559005f52800 session 0x559004ecc000
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101113856 unmapped: 1646592 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc ms_handle_reset ms_handle_reset con 0x5590067fa000
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/762197634
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_configure stats_period=5
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101097472 unmapped: 1662976 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559009534400 session 0x559007202a80
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559007746000 session 0x559008c45180
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa043000/0x0/0x4ffc00000, data 0x1d5980a/0x1e48000, compress 0x0/0x0/0x0, omap 0x2da77, meta 0x3d42589), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 1531904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101228544 unmapped: 1531904 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319383 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101236736 unmapped: 1523712 heap: 102760448 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2424832 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x1d83259/0x1e72000, compress 0x0/0x0/0x0, omap 0x2dabe, meta 0x3d42542), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 2424832 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101523456 unmapped: 2285568 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 101523456 unmapped: 2285568 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.993534088s of 11.781046867s, submitted: 23
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330843 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1097728 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102858752 unmapped: 950272 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102367232 unmapped: 1441792 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9fd7000/0x0/0x4ffc00000, data 0x1dc6753/0x1eb5000, compress 0x0/0x0/0x0, omap 0x2dcaf, meta 0x3d42351), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 1359872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102449152 unmapped: 1359872 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9fac000/0x0/0x4ffc00000, data 0x1df0ce0/0x1ee0000, compress 0x0/0x0/0x0, omap 0x2ddcb, meta 0x3d42235), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326003 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 1204224 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 102793216 unmapped: 1015808 heap: 103809024 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f57000/0x0/0x4ffc00000, data 0x1e467ac/0x1f35000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 1941504 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f57000/0x0/0x4ffc00000, data 0x1e467ac/0x1f35000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 1941504 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 1703936 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.055461884s of 10.391435623s, submitted: 58
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330561 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9f46000/0x0/0x4ffc00000, data 0x1e57622/0x1f46000, compress 0x0/0x0/0x0, omap 0x2e166, meta 0x3d41e9a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104349696 unmapped: 1556480 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x1eab017/0x1f9b000, compress 0x0/0x0/0x0, omap 0x2e1f4, meta 0x3d41e0c), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104546304 unmapped: 1359872 heap: 105906176 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106004480 unmapped: 950272 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 14
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351197 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 794624 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e8f000/0x0/0x4ffc00000, data 0x1f0c618/0x1ffd000, compress 0x0/0x0/0x0, omap 0x2e61d, meta 0x3d419e3), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 1359872 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e64000/0x0/0x4ffc00000, data 0x1f388bd/0x2028000, compress 0x0/0x0/0x0, omap 0x2e7c7, meta 0x3d41839), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 1351680 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 1351680 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e64000/0x0/0x4ffc00000, data 0x1f388bd/0x2028000, compress 0x0/0x0/0x0, omap 0x2e7c7, meta 0x3d41839), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 1163264 heap: 106954752 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.855733871s of 10.000440598s, submitted: 73
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343261 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104775680 unmapped: 3227648 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 3014656 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e16000/0x0/0x4ffc00000, data 0x1f85c3e/0x2076000, compress 0x0/0x0/0x0, omap 0x2e9b8, meta 0x3d41648), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9e16000/0x0/0x4ffc00000, data 0x1f85c3e/0x2076000, compress 0x0/0x0/0x0, omap 0x2e9b8, meta 0x3d41648), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105324544 unmapped: 2678784 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348665 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 2490368 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 2482176 heap: 108003328 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 3538944 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9daa000/0x0/0x4ffc00000, data 0x1ff2a65/0x20e2000, compress 0x0/0x0/0x0, omap 0x2ec7e, meta 0x3d41382), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 3268608 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 3268608 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.970839500s of 10.000647545s, submitted: 52
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350107 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105553920 unmapped: 3497984 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9da6000/0x0/0x4ffc00000, data 0x1ff6b45/0x20e6000, compress 0x0/0x0/0x0, omap 0x2ede1, meta 0x3d4121f), peers [0,2] op hist [0,0,0,0,0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105660416 unmapped: 3391488 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9d7a000/0x0/0x4ffc00000, data 0x202215b/0x2112000, compress 0x0/0x0/0x0, omap 0x2ede1, meta 0x3d4121f), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105857024 unmapped: 3194880 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361775 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105889792 unmapped: 3162112 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9d25000/0x0/0x4ffc00000, data 0x2075e62/0x2167000, compress 0x0/0x0/0x0, omap 0x2f060, meta 0x3d40fa0), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 105930752 unmapped: 3121152 heap: 109051904 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 2744320 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 2523136 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 2523136 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.027631760s of 10.000502586s, submitted: 46
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361939 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 3014656 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x20cdd17/0x21be000, compress 0x0/0x0/0x0, omap 0x2f3fb, meta 0x3d40c05), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 2826240 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 2809856 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 2433024 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c7d000/0x0/0x4ffc00000, data 0x211dc20/0x220f000, compress 0x0/0x0/0x0, omap 0x2f824, meta 0x3d407dc), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 2424832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c6f000/0x0/0x4ffc00000, data 0x212a9c0/0x221c000, compress 0x0/0x0/0x0, omap 0x2f8f9, meta 0x3d40707), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369473 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 2424832 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106913792 unmapped: 3186688 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 106987520 unmapped: 3112960 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 1892352 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9c12000/0x0/0x4ffc00000, data 0x218ace6/0x227a000, compress 0x0/0x0/0x0, omap 0x2faa3, meta 0x3d4055d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108650496 unmapped: 1449984 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.208406448s of 10.001956940s, submitted: 71
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376241 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 1368064 heap: 110100480 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9bad000/0x0/0x4ffc00000, data 0x21ef2fe/0x22df000, compress 0x0/0x0/0x0, omap 0x2fe3e, meta 0x3d401c2), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 2351104 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109215744 unmapped: 1933312 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 2211840 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 2211840 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 ms_handle_reset con 0x559008765000 session 0x559008126540
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379839 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109101056 unmapped: 2048000 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 1859584 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9b3b000/0x0/0x4ffc00000, data 0x22618f6/0x2351000, compress 0x0/0x0/0x0, omap 0x30383, meta 0x3d3fc7d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110370816 unmapped: 778240 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110395392 unmapped: 753664 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 557056 heap: 111149056 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9b0f000/0x0/0x4ffc00000, data 0x228d478/0x237d000, compress 0x0/0x0/0x0, omap 0x3049f, meta 0x3d3fb61), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.538078308s of 10.000422478s, submitted: 61
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384499 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110592000 unmapped: 1605632 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9af0000/0x0/0x4ffc00000, data 0x22ac7b5/0x239c000, compress 0x0/0x0/0x0, omap 0x306d7, meta 0x3d3f929), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1597440 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 1556480 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9ad6000/0x0/0x4ffc00000, data 0x22c679d/0x23b6000, compress 0x0/0x0/0x0, omap 0x3083a, meta 0x3d3f7c6), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110682112 unmapped: 1515520 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1597440 heap: 112197632 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386235 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 110469120 unmapped: 2777088 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9a75000/0x0/0x4ffc00000, data 0x2327bf3/0x2417000, compress 0x0/0x0/0x0, omap 0x30bd5, meta 0x3d3f42b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1638400 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111968256 unmapped: 1277952 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 1515520 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 1515520 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.300408363s of 10.016177177s, submitted: 165
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393819 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 2449408 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f99c3000/0x0/0x4ffc00000, data 0x23d8d96/0x24c9000, compress 0x0/0x0/0x0, omap 0x31161, meta 0x3d3ee9f), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 2056192 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112238592 unmapped: 2056192 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f99c3000/0x0/0x4ffc00000, data 0x23d8d96/0x24c9000, compress 0x0/0x0/0x0, omap 0x3127d, meta 0x3d3ed83), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111484928 unmapped: 2809856 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 2646016 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399779 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 1441792 heap: 114294784 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9952000/0x0/0x4ffc00000, data 0x244b70c/0x253a000, compress 0x0/0x0/0x0, omap 0x3191a, meta 0x3d3e6e6), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9917000/0x0/0x4ffc00000, data 0x24860cb/0x2575000, compress 0x0/0x0/0x0, omap 0x31a36, meta 0x3d3e5ca), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112418816 unmapped: 2924544 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112656384 unmapped: 2686976 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f9917000/0x0/0x4ffc00000, data 0x24860cb/0x2575000, compress 0x0/0x0/0x0, omap 0x31b0b, meta 0x3d3e4f5), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.177977562s of 10.428889275s, submitted: 88
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405161 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 2506752 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f98d5000/0x0/0x4ffc00000, data 0x24c8603/0x25b7000, compress 0x0/0x0/0x0, omap 0x31b0b, meta 0x3d3e4f5), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 2498560 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 2220032 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 2498560 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 2449408 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x250d15e/0x25fe000, compress 0x0/0x0/0x0, omap 0x31e52, meta 0x3d3e1ae), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413183 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f988b000/0x0/0x4ffc00000, data 0x250d15e/0x25fe000, compress 0x0/0x0/0x0, omap 0x31fb5, meta 0x3d3e04b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 2277376 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 2269184 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114368512 unmapped: 974848 heap: 115343360 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 2269184 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9849000/0x0/0x4ffc00000, data 0x2550ee0/0x2643000, compress 0x0/0x0/0x0, omap 0x3227b, meta 0x3d3dd85), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 2260992 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413527 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277583122s of 10.388940811s, submitted: 61
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 3153920 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f97f7000/0x0/0x4ffc00000, data 0x25a30f1/0x2695000, compress 0x0/0x0/0x0, omap 0x32397, meta 0x3d3dc69), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 2990080 heap: 116391936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 3473408 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f97c7000/0x0/0x4ffc00000, data 0x25d36e4/0x26c5000, compress 0x0/0x0/0x0, omap 0x32732, meta 0x3d3d8ce), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424773 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 3407872 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 3399680 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 ms_handle_reset con 0x559009277800 session 0x559006a20e00
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 1785856 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 1785856 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 15
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f9753000/0x0/0x4ffc00000, data 0x26478b5/0x2739000, compress 0x0/0x0/0x0, omap 0x32d93, meta 0x3d3d26d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 1679360 heap: 117440512 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426763 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.372922897s of 10.013011932s, submitted: 276
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f9745000/0x0/0x4ffc00000, data 0x26522a5/0x2745000, compress 0x0/0x0/0x0, omap 0x330dd, meta 0x3d3cf23), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 3465216 heap: 118489088 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 115277824 unmapped: 3211264 heap: 118489088 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 2023424 heap: 119537664 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 1605632 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 1605632 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f7384000/0x0/0x4ffc00000, data 0x26d1331/0x27c6000, compress 0x0/0x0/0x0, omap 0x336ef, meta 0x607c911), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434585 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 1662976 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 2736128 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f7356000/0x0/0x4ffc00000, data 0x2701ed1/0x27f6000, compress 0x0/0x0/0x0, omap 0x3380b, meta 0x607c7f5), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118259712 unmapped: 2326528 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1441097 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.071870804s of 10.087114334s, submitted: 71
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 2260992 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 2260992 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118448128 unmapped: 2138112 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118587392 unmapped: 1998848 heap: 120586240 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f72db000/0x0/0x4ffc00000, data 0x277bc0b/0x2871000, compress 0x0/0x0/0x0, omap 0x34016, meta 0x607bfea), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 118538240 unmapped: 3096576 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443461 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 4022272 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72b5000/0x0/0x4ffc00000, data 0x279f910/0x2895000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 3948544 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72a1000/0x0/0x4ffc00000, data 0x27b5ea9/0x28ab000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445049 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117686272 unmapped: 3948544 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 3858432 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117776384 unmapped: 3858432 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f72a1000/0x0/0x4ffc00000, data 0x27b5ea9/0x28ab000, compress 0x0/0x0/0x0, omap 0x3447c, meta 0x607bb84), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.240313530s of 12.346166611s, submitted: 51
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 3768320 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 3735552 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f7284000/0x0/0x4ffc00000, data 0x27cf715/0x28c6000, compress 0x0/0x0/0x0, omap 0x348ff, meta 0x607b701), peers [0,2] op hist [0,0,0,0,0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449311 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 2482176 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f7268000/0x0/0x4ffc00000, data 0x27ecf3f/0x28e4000, compress 0x0/0x0/0x0, omap 0x34a62, meta 0x607b59e), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119070720 unmapped: 2564096 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119218176 unmapped: 2416640 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f724e000/0x0/0x4ffc00000, data 0x2806ffd/0x28fe000, compress 0x0/0x0/0x0, omap 0x34c0c, meta 0x607b3f4), peers [0,2] op hist [0,0,0,0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 2285568 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119365632 unmapped: 2269184 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452615 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 2121728 heap: 121634816 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 2990080 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 2990080 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.959165096s of 10.444355011s, submitted: 96
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119857152 unmapped: 2826240 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f71d7000/0x0/0x4ffc00000, data 0x287d6fd/0x2975000, compress 0x0/0x0/0x0, omap 0x35035, meta 0x607afcb), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119889920 unmapped: 2793472 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 158 handle_osd_map epochs [158,159], i have 159, src has [1,159]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463501 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 119906304 unmapped: 2777088 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 2564096 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7188000/0x0/0x4ffc00000, data 0x28cc40f/0x29c4000, compress 0x0/0x0/0x0, omap 0x3568c, meta 0x607a974), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 2564096 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 2555904 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f7183000/0x0/0x4ffc00000, data 0x28d1600/0x29c9000, compress 0x0/0x0/0x0, omap 0x3571a, meta 0x607a8e6), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 2662400 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464155 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 2605056 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 2605056 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.447093964s of 11.187532425s, submitted: 79
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f714d000/0x0/0x4ffc00000, data 0x2907a5f/0x29ff000, compress 0x0/0x0/0x0, omap 0x3587d, meta 0x607a783), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464993 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7148000/0x0/0x4ffc00000, data 0x29094de/0x2a02000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f7148000/0x0/0x4ffc00000, data 0x29094de/0x2a02000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 2457600 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 2449408 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465549 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 2441216 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f713d000/0x0/0x4ffc00000, data 0x2913807/0x2a0d000, compress 0x0/0x0/0x0, omap 0x35c11, meta 0x607a3ef), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 2433024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 2433024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.244323730s of 39.156387329s, submitted: 15
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466141 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 2465792 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f711f000/0x0/0x4ffc00000, data 0x29335ed/0x2a2d000, compress 0x0/0x0/0x0, omap 0x35e49, meta 0x607a1b7), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f711f000/0x0/0x4ffc00000, data 0x29335ed/0x2a2d000, compress 0x0/0x0/0x0, omap 0x35e49, meta 0x607a1b7), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 2408448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 2408448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x2968543/0x2a61000, compress 0x0/0x0/0x0, omap 0x35ed7, meta 0x607a129), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471213 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x29685de/0x2a62000, compress 0x0/0x0/0x0, omap 0x35ed7, meta 0x607a129), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 1409024 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.987424850s of 10.797435760s, submitted: 21
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 1392640 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 160 handle_osd_map epochs [160,161], i have 161, src has [1,161]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472851 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 1384448 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a2dc/0x2a64000, compress 0x0/0x0/0x0, omap 0x36720, meta 0x60798e0), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471541 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1376256 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a2dc/0x2a64000, compress 0x0/0x0/0x0, omap 0x36a2d, meta 0x60795d3), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x296a341/0x2a64000, compress 0x0/0x0/0x0, omap 0x36d3a, meta 0x60792c6), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 1368064 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 1359872 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.852377892s of 10.690566063s, submitted: 53
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475847 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e4000/0x0/0x4ffc00000, data 0x296bec0/0x2a68000, compress 0x0/0x0/0x0, omap 0x3722f, meta 0x6078dd1), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e4000/0x0/0x4ffc00000, data 0x296bec0/0x2a68000, compress 0x0/0x0/0x0, omap 0x3722f, meta 0x6078dd1), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475847 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 1343488 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 16
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 1261568 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e3000/0x0/0x4ffc00000, data 0x296bfd5/0x2a69000, compress 0x0/0x0/0x0, omap 0x37583, meta 0x6078a7d), peers [0,2] op hist [0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 17
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.184496880s of 10.089957237s, submitted: 11
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482063 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 1114112 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e1000/0x0/0x4ffc00000, data 0x296c1d9/0x2a6b000, compress 0x0/0x0/0x0, omap 0x37802, meta 0x60787fe), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 1097728 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 1089536 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 1081344 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121610240 unmapped: 1073152 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481521 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x378d7, meta 0x6078729), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 1064960 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.612331390s of 57.233253479s, submitted: 8
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121626624 unmapped: 1056768 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481585 data_alloc: 218103808 data_used: 7996
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121634816 unmapped: 1048576 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37965, meta 0x607869b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 ms_handle_reset con 0x559008765000 session 0x5590095501c0
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c258/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37965, meta 0x607869b), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 ms_handle_reset con 0x5590091bb000 session 0x559009502700
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 1277952 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 18
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 1269760 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481569 data_alloc: 218103808 data_used: 8151
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296c2c1/0x2a6a000, compress 0x0/0x0/0x0, omap 0x37cb9, meta 0x6078347), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.706017494s of 10.825467110s, submitted: 193
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 1130496 heap: 122683392 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484091 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122601472 unmapped: 1130496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482925 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f70e2000/0x0/0x4ffc00000, data 0x296de89/0x2a6a000, compress 0x0/0x0/0x0, omap 0x3839f, meta 0x6077c61), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.024420738s of 11.033547401s, submitted: 49
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486275 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296f908/0x2a6d000, compress 0x0/0x0/0x0, omap 0x386a3, meta 0x607795d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38806, meta 0x60777fa), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38806, meta 0x60777fa), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487247 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 2088960 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.133322716s of 11.148038864s, submitted: 15
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296f9a3/0x2a6e000, compress 0x0/0x0/0x0, omap 0x38894, meta 0x607776c), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487247 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fa3e/0x2a6f000, compress 0x0/0x0/0x0, omap 0x38acc, meta 0x6077534), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488795 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fa3e/0x2a6f000, compress 0x0/0x0/0x0, omap 0x38ba1, meta 0x607745f), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121651200 unmapped: 2080768 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488221 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.623150826s of 11.003303528s, submitted: 12
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70de000/0x0/0x4ffc00000, data 0x296fa6d/0x2a6e000, compress 0x0/0x0/0x0, omap 0x39058, meta 0x6076fa8), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488397 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dc000/0x0/0x4ffc00000, data 0x296fbd2/0x2a70000, compress 0x0/0x0/0x0, omap 0x39556, meta 0x6076aaa), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121511936 unmapped: 2220032 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490089 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.575130463s of 10.002009392s, submitted: 8
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x399c6, meta 0x607663a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489051 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x39cd3, meta 0x607632d), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fc01/0x2a6f000, compress 0x0/0x0/0x0, omap 0x39f99, meta 0x6076067), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489211 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819437981s of 10.002734184s, submitted: 15
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a143, meta 0x6075ebd), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a2a6, meta 0x6075d5a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a2a6, meta 0x6075d5a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fccb/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a4de, meta 0x6075b22), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489227 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 2211840 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2203648 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121528320 unmapped: 2203648 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490887 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.297217369s of 10.052202225s, submitted: 12
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fe5f/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3a907, meta 0x60756f9), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f70dd000/0x0/0x4ffc00000, data 0x296fe5f/0x2a6f000, compress 0x0/0x0/0x0, omap 0x3abcd, meta 0x6075433), peers [0,2] op hist [0,1])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121536512 unmapped: 2195456 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121552896 unmapped: 2179072 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1492913 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f70d9000/0x0/0x4ffc00000, data 0x2971a93/0x2a71000, compress 0x0/0x0/0x0, omap 0x3afa6, meta 0x607505a), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 165 handle_osd_map epochs [165,166], i have 166, src has [1,166]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 2162688 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121577472 unmapped: 2154496 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121585664 unmapped: 2146304 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121593856 unmapped: 2138112 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1495687 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 2129920 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'config show' '{prefix=config show}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d6000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b2aa, meta 0x6074d56), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 121561088 unmapped: 2170880 heap: 123731968 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 19
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 82.034767151s of 82.260131836s, submitted: 50
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 ms_handle_reset con 0x559008cf0400 session 0x559008dbb880
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122142720 unmapped: 3686400 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 3604480 heap: 125829120 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'log dump' '{prefix=log dump}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122232832 unmapped: 14639104 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'perf dump' '{prefix=perf dump}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Got map version 20
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/762197634,v1:192.168.122.100:6801/762197634]
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'perf schema' '{prefix=perf schema}'
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122486784 unmapped: 14385152 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1494967 data_alloc: 218103808 data_used: 7845
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f70d8000/0x0/0x4ffc00000, data 0x2973512/0x2a74000, compress 0x0/0x0/0x0, omap 0x3b49b, meta 0x6074b65), peers [0,2] op hist [])
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
Dec  4 06:02:05 np0005545273 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 14376960 heap: 136871936 old mem: 2845415832 new mem: 2845415832
